00:00:00.001 Started by upstream project "autotest-nightly" build number 4273
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3636
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.030 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.030 The recommended git tool is: git
00:00:00.030 using credential 00000000-0000-0000-0000-000000000002
00:00:00.032 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.046 Fetching changes from the remote Git repository
00:00:00.049 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.066 Using shallow fetch with depth 1
00:00:00.066 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.066 > git --version # timeout=10
00:00:00.099 > git --version # 'git version 2.39.2'
00:00:00.099 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.146 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.146 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.274 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.284 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.296 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:03.296 > git config core.sparsecheckout # timeout=10
00:00:03.305 > git read-tree -mu HEAD # timeout=10
00:00:03.319 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:03.335 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:03.336 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:03.422 [Pipeline] Start of Pipeline
00:00:03.434 [Pipeline] library
00:00:03.435 Loading library shm_lib@master
00:00:03.436 Library shm_lib@master is cached. Copying from home.
00:00:03.450 [Pipeline] node
00:00:03.461 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:03.463 [Pipeline] {
00:00:03.474 [Pipeline] catchError
00:00:03.476 [Pipeline] {
00:00:03.489 [Pipeline] wrap
00:00:03.498 [Pipeline] {
00:00:03.505 [Pipeline] stage
00:00:03.507 [Pipeline] { (Prologue)
00:00:03.525 [Pipeline] echo
00:00:03.526 Node: VM-host-WFP7
00:00:03.532 [Pipeline] cleanWs
00:00:03.542 [WS-CLEANUP] Deleting project workspace...
00:00:03.542 [WS-CLEANUP] Deferred wipeout is used...
00:00:03.549 [WS-CLEANUP] done
00:00:03.784 [Pipeline] setCustomBuildProperty
00:00:03.880 [Pipeline] httpRequest
00:00:04.426 [Pipeline] echo
00:00:04.427 Sorcerer 10.211.164.20 is alive
00:00:04.436 [Pipeline] retry
00:00:04.438 [Pipeline] {
00:00:04.451 [Pipeline] httpRequest
00:00:04.455 HttpMethod: GET
00:00:04.455 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.455 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.457 Response Code: HTTP/1.1 200 OK
00:00:04.457 Success: Status code 200 is in the accepted range: 200,404
00:00:04.458 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.840 [Pipeline] }
00:00:04.856 [Pipeline] // retry
00:00:04.863 [Pipeline] sh
00:00:05.152 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:05.168 [Pipeline] httpRequest
00:00:05.585 [Pipeline] echo
00:00:05.586 Sorcerer 10.211.164.20 is alive
00:00:05.595 [Pipeline] retry
00:00:05.597 [Pipeline] {
00:00:05.610 [Pipeline] httpRequest
00:00:05.614 HttpMethod: GET
00:00:05.615 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:05.616 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:05.618 Response Code: HTTP/1.1 200 OK
00:00:05.618 Success: Status code 200 is in the accepted range: 200,404
00:00:05.619 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:26.383 [Pipeline] }
00:00:26.400 [Pipeline] // retry
00:00:26.407 [Pipeline] sh
00:00:26.692 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:29.248 [Pipeline] sh
00:00:29.535 + git -C spdk log --oneline -n5
00:00:29.535 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:00:29.535 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:00:29.535 4bcab9fb9 correct kick for CQ full case
00:00:29.535 8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:00:29.535 318515b44 nvme/perf: interrupt mode support for pcie controller
00:00:29.556 [Pipeline] writeFile
00:00:29.571 [Pipeline] sh
00:00:29.857 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:29.870 [Pipeline] sh
00:00:30.154 + cat autorun-spdk.conf
00:00:30.154 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.154 SPDK_RUN_ASAN=1
00:00:30.154 SPDK_RUN_UBSAN=1
00:00:30.154 SPDK_TEST_RAID=1
00:00:30.154 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:30.163 RUN_NIGHTLY=1
00:00:30.165 [Pipeline] }
00:00:30.177 [Pipeline] // stage
00:00:30.192 [Pipeline] stage
00:00:30.193 [Pipeline] { (Run VM)
00:00:30.205 [Pipeline] sh
00:00:30.492 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:30.492 + echo 'Start stage prepare_nvme.sh'
00:00:30.492 Start stage prepare_nvme.sh
00:00:30.492 + [[ -n 0 ]]
00:00:30.492 + disk_prefix=ex0
00:00:30.492 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:00:30.492 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:00:30.492 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:00:30.492 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:30.492 ++ SPDK_RUN_ASAN=1
00:00:30.492 ++ SPDK_RUN_UBSAN=1
00:00:30.492 ++ SPDK_TEST_RAID=1
00:00:30.492 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:30.492 ++ RUN_NIGHTLY=1
00:00:30.492 + cd /var/jenkins/workspace/raid-vg-autotest
00:00:30.492 + nvme_files=()
00:00:30.492 + declare -A nvme_files
00:00:30.492 + backend_dir=/var/lib/libvirt/images/backends
00:00:30.492 + nvme_files['nvme.img']=5G
00:00:30.492 + nvme_files['nvme-cmb.img']=5G
00:00:30.492 + nvme_files['nvme-multi0.img']=4G
00:00:30.492 + nvme_files['nvme-multi1.img']=4G
00:00:30.492 + nvme_files['nvme-multi2.img']=4G
00:00:30.492 + nvme_files['nvme-openstack.img']=8G
00:00:30.492 + nvme_files['nvme-zns.img']=5G
00:00:30.492 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:30.492 + (( SPDK_TEST_FTL == 1 ))
00:00:30.492 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:30.492 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:30.492 + for nvme in "${!nvme_files[@]}"
00:00:30.492 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G
00:00:30.492 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:30.492 + for nvme in "${!nvme_files[@]}"
00:00:30.492 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G
00:00:30.492 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:30.492 + for nvme in "${!nvme_files[@]}"
00:00:30.492 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G
00:00:30.492 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:30.492 + for nvme in "${!nvme_files[@]}"
00:00:30.492 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G
00:00:30.492 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:30.492 + for nvme in "${!nvme_files[@]}"
00:00:30.492 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G
00:00:30.492 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:30.492 + for nvme in "${!nvme_files[@]}"
00:00:30.492 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G
00:00:30.492 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:30.753 + for nvme in "${!nvme_files[@]}"
00:00:30.753 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G
00:00:30.753 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:30.753 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu
00:00:30.753 + echo 'End stage prepare_nvme.sh'
00:00:30.753 End stage prepare_nvme.sh
00:00:30.765 [Pipeline] sh
00:00:31.051 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:31.051 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora39
00:00:31.051
00:00:31.051 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:00:31.051 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:00:31.051 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:00:31.051 HELP=0
00:00:31.051 DRY_RUN=0
00:00:31.051 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,
00:00:31.051 NVME_DISKS_TYPE=nvme,nvme,
00:00:31.051 NVME_AUTO_CREATE=0
00:00:31.051 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,
00:00:31.051 NVME_CMB=,,
00:00:31.051 NVME_PMR=,,
00:00:31.051 NVME_ZNS=,,
00:00:31.051 NVME_MS=,,
00:00:31.051 NVME_FDP=,,
00:00:31.051 SPDK_VAGRANT_DISTRO=fedora39
00:00:31.051 SPDK_VAGRANT_VMCPU=10
00:00:31.051 SPDK_VAGRANT_VMRAM=12288
00:00:31.051 SPDK_VAGRANT_PROVIDER=libvirt
00:00:31.051 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:31.051 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:31.051 SPDK_OPENSTACK_NETWORK=0
00:00:31.051 VAGRANT_PACKAGE_BOX=0
00:00:31.051 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:00:31.051 FORCE_DISTRO=true
00:00:31.051 VAGRANT_BOX_VERSION=
00:00:31.051 EXTRA_VAGRANTFILES=
00:00:31.051 NIC_MODEL=virtio
00:00:31.051
00:00:31.051 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:00:31.051 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:00:32.961 Bringing machine 'default' up with 'libvirt' provider...
00:00:33.533 ==> default: Creating image (snapshot of base box volume).
00:00:33.533 ==> default: Creating domain with the following settings...
00:00:33.533 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731782476_8c526f5003f249f17100
00:00:33.533 ==> default: -- Domain type: kvm
00:00:33.533 ==> default: -- Cpus: 10
00:00:33.533 ==> default: -- Feature: acpi
00:00:33.533 ==> default: -- Feature: apic
00:00:33.533 ==> default: -- Feature: pae
00:00:33.533 ==> default: -- Memory: 12288M
00:00:33.533 ==> default: -- Memory Backing: hugepages:
00:00:33.533 ==> default: -- Management MAC:
00:00:33.533 ==> default: -- Loader:
00:00:33.533 ==> default: -- Nvram:
00:00:33.533 ==> default: -- Base box: spdk/fedora39
00:00:33.533 ==> default: -- Storage pool: default
00:00:33.533 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731782476_8c526f5003f249f17100.img (20G)
00:00:33.533 ==> default: -- Volume Cache: default
00:00:33.533 ==> default: -- Kernel:
00:00:33.533 ==> default: -- Initrd:
00:00:33.533 ==> default: -- Graphics Type: vnc
00:00:33.533 ==> default: -- Graphics Port: -1
00:00:33.533 ==> default: -- Graphics IP: 127.0.0.1
00:00:33.533 ==> default: -- Graphics Password: Not defined
00:00:33.533 ==> default: -- Video Type: cirrus
00:00:33.533 ==> default: -- Video VRAM: 9216
00:00:33.533 ==> default: -- Sound Type:
00:00:33.533 ==> default: -- Keymap: en-us
00:00:33.533 ==> default: -- TPM Path:
00:00:33.533 ==> default: -- INPUT: type=mouse, bus=ps2
00:00:33.533 ==> default: -- Command line args:
00:00:33.533 ==> default: -> value=-device,
00:00:33.533 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:00:33.533 ==> default: -> value=-drive,
00:00:33.533 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0,
00:00:33.533 ==> default: -> value=-device,
00:00:33.533 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:33.533 ==> default: -> value=-device,
00:00:33.533 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:00:33.533 ==> default: -> value=-drive,
00:00:33.533 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:00:33.533 ==> default: -> value=-device,
00:00:33.533 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:33.533 ==> default: -> value=-drive,
00:00:33.533 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:00:33.533 ==> default: -> value=-device,
00:00:33.533 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:33.533 ==> default: -> value=-drive,
00:00:33.533 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:00:33.533 ==> default: -> value=-device,
00:00:33.533 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:00:33.793 ==> default: Creating shared folders metadata...
00:00:33.794 ==> default: Starting domain.
00:00:35.705 ==> default: Waiting for domain to get an IP address...
00:00:50.601 ==> default: Waiting for SSH to become available...
00:00:51.987 ==> default: Configuring and enabling network interfaces...
00:00:58.560 default: SSH address: 192.168.121.41:22
00:00:58.560 default: SSH username: vagrant
00:00:58.560 default: SSH auth method: private key
00:01:01.096 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:09.224 ==> default: Mounting SSHFS shared folder...
00:01:11.127 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:11.127 ==> default: Checking Mount..
00:01:13.036 ==> default: Folder Successfully Mounted!
00:01:13.036 ==> default: Running provisioner: file...
00:01:13.975 default: ~/.gitconfig => .gitconfig
00:01:14.542
00:01:14.542 SUCCESS!
00:01:14.542
00:01:14.542 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:14.542 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:14.542 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:14.542 00:01:14.551 [Pipeline] } 00:01:14.566 [Pipeline] // stage 00:01:14.577 [Pipeline] dir 00:01:14.578 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:01:14.579 [Pipeline] { 00:01:14.592 [Pipeline] catchError 00:01:14.593 [Pipeline] { 00:01:14.605 [Pipeline] sh 00:01:14.886 + vagrant ssh-config --host vagrant 00:01:14.886 + sed -ne /^Host/,$p 00:01:14.886 + tee ssh_conf 00:01:17.417 Host vagrant 00:01:17.417 HostName 192.168.121.41 00:01:17.417 User vagrant 00:01:17.417 Port 22 00:01:17.417 UserKnownHostsFile /dev/null 00:01:17.417 StrictHostKeyChecking no 00:01:17.417 PasswordAuthentication no 00:01:17.417 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:17.417 IdentitiesOnly yes 00:01:17.417 LogLevel FATAL 00:01:17.417 ForwardAgent yes 00:01:17.417 ForwardX11 yes 00:01:17.417 00:01:17.430 [Pipeline] withEnv 00:01:17.432 [Pipeline] { 00:01:17.445 [Pipeline] sh 00:01:17.725 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:17.725 source /etc/os-release 00:01:17.725 [[ -e /image.version ]] && img=$(< /image.version) 00:01:17.725 # Minimal, systemd-like check. 00:01:17.725 if [[ -e /.dockerenv ]]; then 00:01:17.725 # Clear garbage from the node's name: 00:01:17.725 # agt-er_autotest_547-896 -> autotest_547-896 00:01:17.725 # $HOSTNAME is the actual container id 00:01:17.725 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:17.725 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:17.725 # We can assume this is a mount from a host where container is running, 00:01:17.725 # so fetch its hostname to easily identify the target swarm worker. 
00:01:17.725 container="$(< /etc/hostname) ($agent)" 00:01:17.725 else 00:01:17.725 # Fallback 00:01:17.725 container=$agent 00:01:17.725 fi 00:01:17.725 fi 00:01:17.725 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:17.725 00:01:17.997 [Pipeline] } 00:01:18.014 [Pipeline] // withEnv 00:01:18.022 [Pipeline] setCustomBuildProperty 00:01:18.037 [Pipeline] stage 00:01:18.039 [Pipeline] { (Tests) 00:01:18.056 [Pipeline] sh 00:01:18.338 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:18.611 [Pipeline] sh 00:01:18.894 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:19.171 [Pipeline] timeout 00:01:19.171 Timeout set to expire in 1 hr 30 min 00:01:19.173 [Pipeline] { 00:01:19.187 [Pipeline] sh 00:01:19.471 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:20.041 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:20.054 [Pipeline] sh 00:01:20.339 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:20.613 [Pipeline] sh 00:01:20.897 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:21.175 [Pipeline] sh 00:01:21.464 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:01:21.725 ++ readlink -f spdk_repo 00:01:21.725 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:21.725 + [[ -n /home/vagrant/spdk_repo ]] 00:01:21.725 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:21.725 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:21.725 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:21.725 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:21.725 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:21.725 + [[ raid-vg-autotest == pkgdep-* ]] 00:01:21.725 + cd /home/vagrant/spdk_repo 00:01:21.725 + source /etc/os-release 00:01:21.725 ++ NAME='Fedora Linux' 00:01:21.725 ++ VERSION='39 (Cloud Edition)' 00:01:21.725 ++ ID=fedora 00:01:21.725 ++ VERSION_ID=39 00:01:21.725 ++ VERSION_CODENAME= 00:01:21.725 ++ PLATFORM_ID=platform:f39 00:01:21.725 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:21.725 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:21.725 ++ LOGO=fedora-logo-icon 00:01:21.725 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:21.725 ++ HOME_URL=https://fedoraproject.org/ 00:01:21.725 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:21.725 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:21.725 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:21.725 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:21.725 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:21.725 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:21.725 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:21.725 ++ SUPPORT_END=2024-11-12 00:01:21.725 ++ VARIANT='Cloud Edition' 00:01:21.725 ++ VARIANT_ID=cloud 00:01:21.725 + uname -a 00:01:21.725 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:21.725 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:22.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:22.295 Hugepages 00:01:22.296 node hugesize free / total 00:01:22.296 node0 1048576kB 0 / 0 00:01:22.296 node0 2048kB 0 / 0 00:01:22.296 00:01:22.296 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.296 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:22.296 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:22.296 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:01:22.296 + rm -f /tmp/spdk-ld-path 00:01:22.296 + source autorun-spdk.conf 00:01:22.296 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.296 ++ SPDK_RUN_ASAN=1 00:01:22.296 ++ SPDK_RUN_UBSAN=1 00:01:22.296 ++ SPDK_TEST_RAID=1 00:01:22.296 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.296 ++ RUN_NIGHTLY=1 00:01:22.296 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.296 + [[ -n '' ]] 00:01:22.296 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:22.296 + for M in /var/spdk/build-*-manifest.txt 00:01:22.296 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:22.296 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.296 + for M in /var/spdk/build-*-manifest.txt 00:01:22.296 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:22.296 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.296 + for M in /var/spdk/build-*-manifest.txt 00:01:22.296 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.296 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:22.296 ++ uname 00:01:22.296 + [[ Linux == \L\i\n\u\x ]] 00:01:22.296 + sudo dmesg -T 00:01:22.556 + sudo dmesg --clear 00:01:22.556 + dmesg_pid=5426 00:01:22.556 + [[ Fedora Linux == FreeBSD ]] 00:01:22.556 + sudo dmesg -Tw 00:01:22.556 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.556 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.556 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.556 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.556 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.556 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.556 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.556 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:22.556 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.556 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.556 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.556 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.556 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.556 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.556 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.556 18:42:05 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:22.556 18:42:05 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.556 18:42:05 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.556 18:42:05 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:01:22.556 18:42:05 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:01:22.556 18:42:05 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:01:22.556 18:42:05 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:22.556 18:42:05 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1 00:01:22.556 18:42:05 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:22.556 18:42:05 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:22.556 18:42:06 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:22.556 18:42:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:22.556 18:42:06 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:22.556 18:42:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.556 18:42:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.556 18:42:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.556 18:42:06 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.556 18:42:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.556 18:42:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.556 18:42:06 -- paths/export.sh@5 -- $ export PATH 00:01:22.556 18:42:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.817 18:42:06 -- 
common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:22.817 18:42:06 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:22.817 18:42:06 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731782526.XXXXXX 00:01:22.817 18:42:06 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731782526.3IHRhg 00:01:22.817 18:42:06 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:22.817 18:42:06 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:22.817 18:42:06 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:22.817 18:42:06 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:22.817 18:42:06 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.817 18:42:06 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:22.817 18:42:06 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:22.817 18:42:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.817 18:42:06 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:01:22.817 18:42:06 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:22.817 18:42:06 -- pm/common@17 -- $ local monitor 00:01:22.817 18:42:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.817 18:42:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.817 18:42:06 -- pm/common@25 -- $ sleep 1 00:01:22.817 18:42:06 -- pm/common@21 -- $ date +%s 00:01:22.817 18:42:06 -- pm/common@21 -- $ date +%s 00:01:22.817 
18:42:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731782526 00:01:22.817 18:42:06 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731782526 00:01:22.817 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731782526_collect-cpu-load.pm.log 00:01:22.817 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731782526_collect-vmstat.pm.log 00:01:23.757 18:42:07 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:23.757 18:42:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.757 18:42:07 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.757 18:42:07 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:23.757 18:42:07 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.757 Sat Nov 16 06:42:07 PM UTC 2024 00:01:23.757 18:42:07 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.757 v25.01-pre-189-g83e8405e4 00:01:23.757 18:42:07 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:23.757 18:42:07 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:23.757 18:42:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:23.757 18:42:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:23.757 18:42:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.757 ************************************ 00:01:23.757 START TEST asan 00:01:23.757 ************************************ 00:01:23.757 using asan 00:01:23.757 18:42:07 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:23.757 00:01:23.757 real 0m0.001s 00:01:23.757 user 0m0.000s 00:01:23.757 sys 0m0.000s 00:01:23.757 18:42:07 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:23.757 18:42:07 asan -- common/autotest_common.sh@10 -- $ set +x 
00:01:23.758 ************************************ 00:01:23.758 END TEST asan 00:01:23.758 ************************************ 00:01:23.758 18:42:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.758 18:42:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.758 18:42:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:23.758 18:42:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:23.758 18:42:07 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.758 ************************************ 00:01:23.758 START TEST ubsan 00:01:23.758 ************************************ 00:01:23.758 using ubsan 00:01:23.758 18:42:07 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:23.758 00:01:23.758 real 0m0.000s 00:01:23.758 user 0m0.000s 00:01:23.758 sys 0m0.000s 00:01:23.758 18:42:07 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:23.758 18:42:07 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.758 ************************************ 00:01:23.758 END TEST ubsan 00:01:23.758 ************************************ 00:01:24.017 18:42:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:24.017 18:42:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:24.017 18:42:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:24.017 18:42:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:24.017 18:42:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:24.017 18:42:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:24.017 18:42:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:24.017 18:42:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:24.017 18:42:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:01:24.017 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:24.017 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:24.587 Using 'verbs' RDMA provider
00:01:40.471 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:01:58.580 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:01:58.580 Creating mk/config.mk...done.
00:01:58.580 Creating mk/cc.flags.mk...done.
00:01:58.580 Type 'make' to build.
00:01:58.580 18:42:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:01:58.580 18:42:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:58.580 18:42:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:58.580 18:42:40 -- common/autotest_common.sh@10 -- $ set +x
00:01:58.580 ************************************
00:01:58.580 START TEST make
00:01:58.580 ************************************
00:01:58.580 18:42:40 make -- common/autotest_common.sh@1129 -- $ make -j10
00:01:58.580 make[1]: Nothing to be done for 'all'.
00:02:06.711 The Meson build system 00:02:06.711 Version: 1.5.0 00:02:06.711 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:06.711 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:06.711 Build type: native build 00:02:06.711 Program cat found: YES (/usr/bin/cat) 00:02:06.711 Project name: DPDK 00:02:06.711 Project version: 24.03.0 00:02:06.711 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:06.711 C linker for the host machine: cc ld.bfd 2.40-14 00:02:06.711 Host machine cpu family: x86_64 00:02:06.711 Host machine cpu: x86_64 00:02:06.711 Message: ## Building in Developer Mode ## 00:02:06.711 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:06.711 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:06.711 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:06.711 Program python3 found: YES (/usr/bin/python3) 00:02:06.711 Program cat found: YES (/usr/bin/cat) 00:02:06.711 Compiler for C supports arguments -march=native: YES 00:02:06.711 Checking for size of "void *" : 8 00:02:06.711 Checking for size of "void *" : 8 (cached) 00:02:06.711 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:06.711 Library m found: YES 00:02:06.711 Library numa found: YES 00:02:06.711 Has header "numaif.h" : YES 00:02:06.711 Library fdt found: NO 00:02:06.711 Library execinfo found: NO 00:02:06.711 Has header "execinfo.h" : YES 00:02:06.711 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:06.711 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:06.711 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:06.711 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:06.711 Run-time dependency openssl found: YES 3.1.1 00:02:06.711 Run-time dependency libpcap found: YES 1.10.4 00:02:06.711 Has header "pcap.h" with dependency 
libpcap: YES 00:02:06.711 Compiler for C supports arguments -Wcast-qual: YES 00:02:06.711 Compiler for C supports arguments -Wdeprecated: YES 00:02:06.711 Compiler for C supports arguments -Wformat: YES 00:02:06.711 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:06.711 Compiler for C supports arguments -Wformat-security: NO 00:02:06.711 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:06.711 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:06.711 Compiler for C supports arguments -Wnested-externs: YES 00:02:06.711 Compiler for C supports arguments -Wold-style-definition: YES 00:02:06.711 Compiler for C supports arguments -Wpointer-arith: YES 00:02:06.711 Compiler for C supports arguments -Wsign-compare: YES 00:02:06.711 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:06.711 Compiler for C supports arguments -Wundef: YES 00:02:06.711 Compiler for C supports arguments -Wwrite-strings: YES 00:02:06.711 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:06.711 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:06.711 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:06.711 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:06.711 Program objdump found: YES (/usr/bin/objdump) 00:02:06.711 Compiler for C supports arguments -mavx512f: YES 00:02:06.711 Checking if "AVX512 checking" compiles: YES 00:02:06.711 Fetching value of define "__SSE4_2__" : 1 00:02:06.711 Fetching value of define "__AES__" : 1 00:02:06.711 Fetching value of define "__AVX__" : 1 00:02:06.711 Fetching value of define "__AVX2__" : 1 00:02:06.711 Fetching value of define "__AVX512BW__" : 1 00:02:06.711 Fetching value of define "__AVX512CD__" : 1 00:02:06.711 Fetching value of define "__AVX512DQ__" : 1 00:02:06.711 Fetching value of define "__AVX512F__" : 1 00:02:06.711 Fetching value of define "__AVX512VL__" : 1 00:02:06.711 Fetching value of define 
"__PCLMUL__" : 1 00:02:06.711 Fetching value of define "__RDRND__" : 1 00:02:06.711 Fetching value of define "__RDSEED__" : 1 00:02:06.711 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:06.711 Fetching value of define "__znver1__" : (undefined) 00:02:06.711 Fetching value of define "__znver2__" : (undefined) 00:02:06.711 Fetching value of define "__znver3__" : (undefined) 00:02:06.711 Fetching value of define "__znver4__" : (undefined) 00:02:06.711 Library asan found: YES 00:02:06.711 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:06.711 Message: lib/log: Defining dependency "log" 00:02:06.711 Message: lib/kvargs: Defining dependency "kvargs" 00:02:06.711 Message: lib/telemetry: Defining dependency "telemetry" 00:02:06.711 Library rt found: YES 00:02:06.711 Checking for function "getentropy" : NO 00:02:06.711 Message: lib/eal: Defining dependency "eal" 00:02:06.711 Message: lib/ring: Defining dependency "ring" 00:02:06.712 Message: lib/rcu: Defining dependency "rcu" 00:02:06.712 Message: lib/mempool: Defining dependency "mempool" 00:02:06.712 Message: lib/mbuf: Defining dependency "mbuf" 00:02:06.712 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:06.712 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.712 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.712 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.712 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:06.712 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:06.712 Compiler for C supports arguments -mpclmul: YES 00:02:06.712 Compiler for C supports arguments -maes: YES 00:02:06.712 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:06.712 Compiler for C supports arguments -mavx512bw: YES 00:02:06.712 Compiler for C supports arguments -mavx512dq: YES 00:02:06.712 Compiler for C supports arguments -mavx512vl: YES 00:02:06.712 Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:06.712 Compiler for C supports arguments -mavx2: YES 00:02:06.712 Compiler for C supports arguments -mavx: YES 00:02:06.712 Message: lib/net: Defining dependency "net" 00:02:06.712 Message: lib/meter: Defining dependency "meter" 00:02:06.712 Message: lib/ethdev: Defining dependency "ethdev" 00:02:06.712 Message: lib/pci: Defining dependency "pci" 00:02:06.712 Message: lib/cmdline: Defining dependency "cmdline" 00:02:06.712 Message: lib/hash: Defining dependency "hash" 00:02:06.712 Message: lib/timer: Defining dependency "timer" 00:02:06.712 Message: lib/compressdev: Defining dependency "compressdev" 00:02:06.712 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:06.712 Message: lib/dmadev: Defining dependency "dmadev" 00:02:06.712 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:06.712 Message: lib/power: Defining dependency "power" 00:02:06.712 Message: lib/reorder: Defining dependency "reorder" 00:02:06.712 Message: lib/security: Defining dependency "security" 00:02:06.712 Has header "linux/userfaultfd.h" : YES 00:02:06.712 Has header "linux/vduse.h" : YES 00:02:06.712 Message: lib/vhost: Defining dependency "vhost" 00:02:06.712 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:06.712 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:06.712 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:06.712 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:06.712 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:06.712 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:06.712 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:06.712 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:06.712 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:06.712 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:06.712 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:06.712 Configuring doxy-api-html.conf using configuration 00:02:06.712 Configuring doxy-api-man.conf using configuration 00:02:06.712 Program mandb found: YES (/usr/bin/mandb) 00:02:06.712 Program sphinx-build found: NO 00:02:06.712 Configuring rte_build_config.h using configuration 00:02:06.712 Message: 00:02:06.712 ================= 00:02:06.712 Applications Enabled 00:02:06.712 ================= 00:02:06.712 00:02:06.712 apps: 00:02:06.712 00:02:06.712 00:02:06.712 Message: 00:02:06.712 ================= 00:02:06.712 Libraries Enabled 00:02:06.712 ================= 00:02:06.712 00:02:06.712 libs: 00:02:06.712 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:06.712 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:06.712 cryptodev, dmadev, power, reorder, security, vhost, 00:02:06.712 00:02:06.712 Message: 00:02:06.712 =============== 00:02:06.712 Drivers Enabled 00:02:06.712 =============== 00:02:06.712 00:02:06.712 common: 00:02:06.712 00:02:06.712 bus: 00:02:06.712 pci, vdev, 00:02:06.712 mempool: 00:02:06.712 ring, 00:02:06.712 dma: 00:02:06.712 00:02:06.712 net: 00:02:06.712 00:02:06.712 crypto: 00:02:06.712 00:02:06.712 compress: 00:02:06.712 00:02:06.712 vdpa: 00:02:06.712 00:02:06.712 00:02:06.712 Message: 00:02:06.712 ================= 00:02:06.712 Content Skipped 00:02:06.712 ================= 00:02:06.712 00:02:06.712 apps: 00:02:06.712 dumpcap: explicitly disabled via build config 00:02:06.712 graph: explicitly disabled via build config 00:02:06.712 pdump: explicitly disabled via build config 00:02:06.712 proc-info: explicitly disabled via build config 00:02:06.712 test-acl: explicitly disabled via build config 00:02:06.712 test-bbdev: explicitly disabled via build config 00:02:06.712 test-cmdline: explicitly disabled via build config 00:02:06.712 test-compress-perf: explicitly disabled via build config 00:02:06.712 test-crypto-perf: explicitly disabled via build 
config 00:02:06.712 test-dma-perf: explicitly disabled via build config 00:02:06.712 test-eventdev: explicitly disabled via build config 00:02:06.712 test-fib: explicitly disabled via build config 00:02:06.712 test-flow-perf: explicitly disabled via build config 00:02:06.712 test-gpudev: explicitly disabled via build config 00:02:06.712 test-mldev: explicitly disabled via build config 00:02:06.712 test-pipeline: explicitly disabled via build config 00:02:06.712 test-pmd: explicitly disabled via build config 00:02:06.712 test-regex: explicitly disabled via build config 00:02:06.712 test-sad: explicitly disabled via build config 00:02:06.712 test-security-perf: explicitly disabled via build config 00:02:06.712 00:02:06.712 libs: 00:02:06.712 argparse: explicitly disabled via build config 00:02:06.712 metrics: explicitly disabled via build config 00:02:06.712 acl: explicitly disabled via build config 00:02:06.712 bbdev: explicitly disabled via build config 00:02:06.712 bitratestats: explicitly disabled via build config 00:02:06.712 bpf: explicitly disabled via build config 00:02:06.712 cfgfile: explicitly disabled via build config 00:02:06.712 distributor: explicitly disabled via build config 00:02:06.712 efd: explicitly disabled via build config 00:02:06.712 eventdev: explicitly disabled via build config 00:02:06.712 dispatcher: explicitly disabled via build config 00:02:06.712 gpudev: explicitly disabled via build config 00:02:06.712 gro: explicitly disabled via build config 00:02:06.712 gso: explicitly disabled via build config 00:02:06.712 ip_frag: explicitly disabled via build config 00:02:06.712 jobstats: explicitly disabled via build config 00:02:06.712 latencystats: explicitly disabled via build config 00:02:06.712 lpm: explicitly disabled via build config 00:02:06.712 member: explicitly disabled via build config 00:02:06.712 pcapng: explicitly disabled via build config 00:02:06.712 rawdev: explicitly disabled via build config 00:02:06.712 regexdev: explicitly 
disabled via build config 00:02:06.712 mldev: explicitly disabled via build config 00:02:06.712 rib: explicitly disabled via build config 00:02:06.712 sched: explicitly disabled via build config 00:02:06.712 stack: explicitly disabled via build config 00:02:06.712 ipsec: explicitly disabled via build config 00:02:06.712 pdcp: explicitly disabled via build config 00:02:06.712 fib: explicitly disabled via build config 00:02:06.712 port: explicitly disabled via build config 00:02:06.712 pdump: explicitly disabled via build config 00:02:06.712 table: explicitly disabled via build config 00:02:06.712 pipeline: explicitly disabled via build config 00:02:06.712 graph: explicitly disabled via build config 00:02:06.712 node: explicitly disabled via build config 00:02:06.712 00:02:06.712 drivers: 00:02:06.712 common/cpt: not in enabled drivers build config 00:02:06.712 common/dpaax: not in enabled drivers build config 00:02:06.712 common/iavf: not in enabled drivers build config 00:02:06.712 common/idpf: not in enabled drivers build config 00:02:06.712 common/ionic: not in enabled drivers build config 00:02:06.712 common/mvep: not in enabled drivers build config 00:02:06.712 common/octeontx: not in enabled drivers build config 00:02:06.712 bus/auxiliary: not in enabled drivers build config 00:02:06.712 bus/cdx: not in enabled drivers build config 00:02:06.712 bus/dpaa: not in enabled drivers build config 00:02:06.712 bus/fslmc: not in enabled drivers build config 00:02:06.712 bus/ifpga: not in enabled drivers build config 00:02:06.712 bus/platform: not in enabled drivers build config 00:02:06.712 bus/uacce: not in enabled drivers build config 00:02:06.712 bus/vmbus: not in enabled drivers build config 00:02:06.712 common/cnxk: not in enabled drivers build config 00:02:06.712 common/mlx5: not in enabled drivers build config 00:02:06.712 common/nfp: not in enabled drivers build config 00:02:06.712 common/nitrox: not in enabled drivers build config 00:02:06.712 common/qat: not 
in enabled drivers build config 00:02:06.712 common/sfc_efx: not in enabled drivers build config 00:02:06.712 mempool/bucket: not in enabled drivers build config 00:02:06.712 mempool/cnxk: not in enabled drivers build config 00:02:06.712 mempool/dpaa: not in enabled drivers build config 00:02:06.712 mempool/dpaa2: not in enabled drivers build config 00:02:06.712 mempool/octeontx: not in enabled drivers build config 00:02:06.712 mempool/stack: not in enabled drivers build config 00:02:06.712 dma/cnxk: not in enabled drivers build config 00:02:06.712 dma/dpaa: not in enabled drivers build config 00:02:06.712 dma/dpaa2: not in enabled drivers build config 00:02:06.712 dma/hisilicon: not in enabled drivers build config 00:02:06.712 dma/idxd: not in enabled drivers build config 00:02:06.712 dma/ioat: not in enabled drivers build config 00:02:06.712 dma/skeleton: not in enabled drivers build config 00:02:06.712 net/af_packet: not in enabled drivers build config 00:02:06.712 net/af_xdp: not in enabled drivers build config 00:02:06.712 net/ark: not in enabled drivers build config 00:02:06.712 net/atlantic: not in enabled drivers build config 00:02:06.712 net/avp: not in enabled drivers build config 00:02:06.712 net/axgbe: not in enabled drivers build config 00:02:06.712 net/bnx2x: not in enabled drivers build config 00:02:06.712 net/bnxt: not in enabled drivers build config 00:02:06.713 net/bonding: not in enabled drivers build config 00:02:06.713 net/cnxk: not in enabled drivers build config 00:02:06.713 net/cpfl: not in enabled drivers build config 00:02:06.713 net/cxgbe: not in enabled drivers build config 00:02:06.713 net/dpaa: not in enabled drivers build config 00:02:06.713 net/dpaa2: not in enabled drivers build config 00:02:06.713 net/e1000: not in enabled drivers build config 00:02:06.713 net/ena: not in enabled drivers build config 00:02:06.713 net/enetc: not in enabled drivers build config 00:02:06.713 net/enetfec: not in enabled drivers build config 
00:02:06.713 net/enic: not in enabled drivers build config 00:02:06.713 net/failsafe: not in enabled drivers build config 00:02:06.713 net/fm10k: not in enabled drivers build config 00:02:06.713 net/gve: not in enabled drivers build config 00:02:06.713 net/hinic: not in enabled drivers build config 00:02:06.713 net/hns3: not in enabled drivers build config 00:02:06.713 net/i40e: not in enabled drivers build config 00:02:06.713 net/iavf: not in enabled drivers build config 00:02:06.713 net/ice: not in enabled drivers build config 00:02:06.713 net/idpf: not in enabled drivers build config 00:02:06.713 net/igc: not in enabled drivers build config 00:02:06.713 net/ionic: not in enabled drivers build config 00:02:06.713 net/ipn3ke: not in enabled drivers build config 00:02:06.713 net/ixgbe: not in enabled drivers build config 00:02:06.713 net/mana: not in enabled drivers build config 00:02:06.713 net/memif: not in enabled drivers build config 00:02:06.713 net/mlx4: not in enabled drivers build config 00:02:06.713 net/mlx5: not in enabled drivers build config 00:02:06.713 net/mvneta: not in enabled drivers build config 00:02:06.713 net/mvpp2: not in enabled drivers build config 00:02:06.713 net/netvsc: not in enabled drivers build config 00:02:06.713 net/nfb: not in enabled drivers build config 00:02:06.713 net/nfp: not in enabled drivers build config 00:02:06.713 net/ngbe: not in enabled drivers build config 00:02:06.713 net/null: not in enabled drivers build config 00:02:06.713 net/octeontx: not in enabled drivers build config 00:02:06.713 net/octeon_ep: not in enabled drivers build config 00:02:06.713 net/pcap: not in enabled drivers build config 00:02:06.713 net/pfe: not in enabled drivers build config 00:02:06.713 net/qede: not in enabled drivers build config 00:02:06.713 net/ring: not in enabled drivers build config 00:02:06.713 net/sfc: not in enabled drivers build config 00:02:06.713 net/softnic: not in enabled drivers build config 00:02:06.713 net/tap: not in 
enabled drivers build config 00:02:06.713 net/thunderx: not in enabled drivers build config 00:02:06.713 net/txgbe: not in enabled drivers build config 00:02:06.713 net/vdev_netvsc: not in enabled drivers build config 00:02:06.713 net/vhost: not in enabled drivers build config 00:02:06.713 net/virtio: not in enabled drivers build config 00:02:06.713 net/vmxnet3: not in enabled drivers build config 00:02:06.713 raw/*: missing internal dependency, "rawdev" 00:02:06.713 crypto/armv8: not in enabled drivers build config 00:02:06.713 crypto/bcmfs: not in enabled drivers build config 00:02:06.713 crypto/caam_jr: not in enabled drivers build config 00:02:06.713 crypto/ccp: not in enabled drivers build config 00:02:06.713 crypto/cnxk: not in enabled drivers build config 00:02:06.713 crypto/dpaa_sec: not in enabled drivers build config 00:02:06.713 crypto/dpaa2_sec: not in enabled drivers build config 00:02:06.713 crypto/ipsec_mb: not in enabled drivers build config 00:02:06.713 crypto/mlx5: not in enabled drivers build config 00:02:06.713 crypto/mvsam: not in enabled drivers build config 00:02:06.713 crypto/nitrox: not in enabled drivers build config 00:02:06.713 crypto/null: not in enabled drivers build config 00:02:06.713 crypto/octeontx: not in enabled drivers build config 00:02:06.713 crypto/openssl: not in enabled drivers build config 00:02:06.713 crypto/scheduler: not in enabled drivers build config 00:02:06.713 crypto/uadk: not in enabled drivers build config 00:02:06.713 crypto/virtio: not in enabled drivers build config 00:02:06.713 compress/isal: not in enabled drivers build config 00:02:06.713 compress/mlx5: not in enabled drivers build config 00:02:06.713 compress/nitrox: not in enabled drivers build config 00:02:06.713 compress/octeontx: not in enabled drivers build config 00:02:06.713 compress/zlib: not in enabled drivers build config 00:02:06.713 regex/*: missing internal dependency, "regexdev" 00:02:06.713 ml/*: missing internal dependency, "mldev" 
00:02:06.713 vdpa/ifc: not in enabled drivers build config
00:02:06.713 vdpa/mlx5: not in enabled drivers build config
00:02:06.713 vdpa/nfp: not in enabled drivers build config
00:02:06.713 vdpa/sfc: not in enabled drivers build config
00:02:06.713 event/*: missing internal dependency, "eventdev"
00:02:06.713 baseband/*: missing internal dependency, "bbdev"
00:02:06.713 gpu/*: missing internal dependency, "gpudev"
00:02:06.713
00:02:06.974 Build targets in project: 85
00:02:06.974
00:02:06.974 DPDK 24.03.0
00:02:06.974
00:02:06.974 User defined options
00:02:06.974 buildtype : debug
00:02:06.974 default_library : shared
00:02:06.974 libdir : lib
00:02:06.974 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:06.974 b_sanitize : address
00:02:06.974 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:06.974 c_link_args :
00:02:06.974 cpu_instruction_set: native
00:02:06.974 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:06.974 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:06.974 enable_docs : false
00:02:06.974 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:06.974 enable_kmods : false
00:02:06.974 max_lcores : 128
00:02:06.974 tests : false
00:02:06.974
00:02:07.543 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:07.543 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:07.543 [1/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:07.543 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:07.543 [3/268] Linking static target lib/librte_kvargs.a 00:02:07.543 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.543 [5/268] Linking static target lib/librte_log.a 00:02:07.544 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:07.803 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:07.803 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:07.803 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:07.803 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:07.803 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.062 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.062 [13/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.062 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.062 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.062 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:08.062 [17/268] Linking static target lib/librte_telemetry.a 00:02:08.322 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.322 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.322 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.322 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.322 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.582 [23/268] Linking target lib/librte_log.so.24.1 00:02:08.582 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.582 [25/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.582 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:08.582 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:08.841 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:08.841 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:08.841 [30/268] Linking target lib/librte_kvargs.so.24.1 00:02:08.841 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.841 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:08.841 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.841 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:08.841 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:09.100 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:09.100 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:09.101 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:09.101 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:09.101 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:09.101 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:09.101 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:09.101 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:09.360 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:09.360 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:09.360 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:09.620 
[47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:09.620 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:09.620 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.620 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.620 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.620 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:09.880 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:09.880 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:09.880 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:09.880 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:10.140 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:10.140 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:10.140 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:10.140 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:10.140 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:10.140 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:10.140 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:10.399 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:10.399 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:10.399 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:10.399 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:10.660 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:10.660 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:10.660 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:10.660 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:10.660 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:10.660 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:10.660 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:10.660 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:10.920 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:10.920 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:10.920 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:10.920 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:10.920 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:11.180 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:11.180 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:11.180 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:11.439 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:11.439 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:11.439 [86/268] Linking static target lib/librte_ring.a 00:02:11.439 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:11.439 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:11.439 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:11.439 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:11.439 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:11.439 [92/268] Linking static target lib/librte_rcu.a 
00:02:11.439 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:11.439 [94/268] Linking static target lib/librte_mempool.a 00:02:11.439 [95/268] Linking static target lib/librte_eal.a 00:02:11.699 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:11.699 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:11.959 [98/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.959 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:11.959 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.959 [101/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:11.959 [102/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:11.959 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:11.960 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:11.960 [105/268] Linking static target lib/librte_mbuf.a 00:02:12.219 [106/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:12.219 [107/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:12.219 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:12.219 [109/268] Linking static target lib/librte_meter.a 00:02:12.219 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:12.219 [111/268] Linking static target lib/librte_net.a 00:02:12.479 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:12.479 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:12.479 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.739 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:12.739 [116/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to 
capture output) 00:02:12.739 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.739 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:12.999 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:12.999 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.999 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:13.259 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:13.259 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:13.259 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.520 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.520 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:13.520 [127/268] Linking static target lib/librte_pci.a 00:02:13.520 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:13.520 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.520 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:13.780 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:13.780 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.780 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:13.780 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:13.780 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:13.780 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:13.780 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.780 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 
00:02:13.780 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.780 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:13.780 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:13.780 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:13.780 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:13.780 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:14.040 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:14.040 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:14.040 [147/268] Linking static target lib/librte_cmdline.a 00:02:14.300 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.300 [149/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.300 [150/268] Linking static target lib/librte_timer.a 00:02:14.300 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.560 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:14.560 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.560 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.820 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:14.820 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:15.080 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.080 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:15.080 [159/268] Linking static target lib/librte_compressdev.a 00:02:15.080 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:15.080 [161/268] 
Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:15.080 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:15.080 [163/268] Linking static target lib/librte_hash.a 00:02:15.340 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:15.340 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.340 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:15.340 [167/268] Linking static target lib/librte_dmadev.a 00:02:15.340 [168/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:15.340 [169/268] Linking static target lib/librte_ethdev.a 00:02:15.601 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.601 [171/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.601 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.601 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:15.860 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:15.860 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.126 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:16.126 [177/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.126 [178/268] Linking static target lib/librte_cryptodev.a 00:02:16.126 [179/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.126 [180/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:16.126 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.126 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.126 [183/268] Generating 
lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.126 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:16.406 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.406 [186/268] Linking static target lib/librte_power.a 00:02:16.692 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.692 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.692 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:16.692 [190/268] Linking static target lib/librte_reorder.a 00:02:16.950 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:16.950 [192/268] Linking static target lib/librte_security.a 00:02:16.950 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:17.209 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.209 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.468 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.468 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.728 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:17.728 [199/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.728 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:17.728 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:17.988 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:17.988 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:17.988 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:17.988 [205/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:18.247 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:18.247 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:18.247 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:18.247 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.506 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:18.506 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:18.506 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:18.765 [213/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:18.765 [214/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.765 [215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:18.765 [216/268] Linking static target drivers/librte_bus_pci.a 00:02:18.765 [217/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.765 [218/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.765 [219/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:18.765 [220/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.765 [221/268] Linking static target drivers/librte_bus_vdev.a 00:02:18.765 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:19.024 [223/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.024 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:19.024 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:19.024 [226/268] Generating 
drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.283 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.221 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:21.603 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.603 [230/268] Linking target lib/librte_eal.so.24.1 00:02:21.603 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:21.603 [232/268] Linking target lib/librte_meter.so.24.1 00:02:21.603 [233/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:21.603 [234/268] Linking target lib/librte_ring.so.24.1 00:02:21.603 [235/268] Linking target lib/librte_timer.so.24.1 00:02:21.603 [236/268] Linking target lib/librte_pci.so.24.1 00:02:21.603 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:21.863 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:21.863 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:21.863 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:21.863 [241/268] Linking target lib/librte_rcu.so.24.1 00:02:21.863 [242/268] Linking target lib/librte_mempool.so.24.1 00:02:21.863 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:21.863 [244/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:21.863 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:21.863 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:21.863 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:22.122 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:22.122 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:22.122 
[250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:22.122 [251/268] Linking target lib/librte_compressdev.so.24.1 00:02:22.122 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:22.122 [253/268] Linking target lib/librte_net.so.24.1 00:02:22.122 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:02:22.382 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:22.382 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:22.382 [257/268] Linking target lib/librte_hash.so.24.1 00:02:22.382 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:22.382 [259/268] Linking target lib/librte_security.so.24.1 00:02:22.642 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:24.024 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.024 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:24.024 [263/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:24.024 [264/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:24.024 [265/268] Linking static target lib/librte_vhost.a 00:02:24.024 [266/268] Linking target lib/librte_power.so.24.1 00:02:26.564 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.564 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:26.564 INFO: autodetecting backend as ninja 00:02:26.564 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:44.684 CC lib/log/log.o 00:02:44.684 CC lib/log/log_flags.o 00:02:44.684 CC lib/log/log_deprecated.o 00:02:44.684 CC lib/ut/ut.o 00:02:44.684 CC lib/ut_mock/mock.o 00:02:44.684 LIB libspdk_ut.a 00:02:44.684 LIB libspdk_log.a 00:02:44.684 SO libspdk_ut.so.2.0 00:02:44.684 LIB libspdk_ut_mock.a 
00:02:44.684 SO libspdk_log.so.7.1 00:02:44.684 SO libspdk_ut_mock.so.6.0 00:02:44.684 SYMLINK libspdk_ut.so 00:02:44.684 SYMLINK libspdk_log.so 00:02:44.684 SYMLINK libspdk_ut_mock.so 00:02:44.943 CC lib/util/bit_array.o 00:02:44.943 CXX lib/trace_parser/trace.o 00:02:44.943 CC lib/util/base64.o 00:02:44.943 CC lib/util/crc16.o 00:02:44.943 CC lib/util/cpuset.o 00:02:44.943 CC lib/util/crc32c.o 00:02:44.943 CC lib/util/crc32.o 00:02:44.943 CC lib/dma/dma.o 00:02:44.943 CC lib/ioat/ioat.o 00:02:44.943 CC lib/util/crc32_ieee.o 00:02:44.943 CC lib/vfio_user/host/vfio_user_pci.o 00:02:44.943 CC lib/util/crc64.o 00:02:45.201 CC lib/util/dif.o 00:02:45.201 CC lib/util/fd.o 00:02:45.201 LIB libspdk_dma.a 00:02:45.201 CC lib/vfio_user/host/vfio_user.o 00:02:45.201 CC lib/util/fd_group.o 00:02:45.201 SO libspdk_dma.so.5.0 00:02:45.201 CC lib/util/file.o 00:02:45.201 CC lib/util/hexlify.o 00:02:45.201 SYMLINK libspdk_dma.so 00:02:45.201 CC lib/util/iov.o 00:02:45.201 LIB libspdk_ioat.a 00:02:45.201 CC lib/util/math.o 00:02:45.201 SO libspdk_ioat.so.7.0 00:02:45.459 SYMLINK libspdk_ioat.so 00:02:45.459 CC lib/util/net.o 00:02:45.459 CC lib/util/pipe.o 00:02:45.459 CC lib/util/strerror_tls.o 00:02:45.459 CC lib/util/string.o 00:02:45.459 LIB libspdk_vfio_user.a 00:02:45.459 SO libspdk_vfio_user.so.5.0 00:02:45.459 CC lib/util/uuid.o 00:02:45.459 CC lib/util/xor.o 00:02:45.459 CC lib/util/zipf.o 00:02:45.459 SYMLINK libspdk_vfio_user.so 00:02:45.459 CC lib/util/md5.o 00:02:45.717 LIB libspdk_util.a 00:02:45.975 SO libspdk_util.so.10.1 00:02:45.975 SYMLINK libspdk_util.so 00:02:46.232 LIB libspdk_trace_parser.a 00:02:46.232 SO libspdk_trace_parser.so.6.0 00:02:46.232 CC lib/rdma_utils/rdma_utils.o 00:02:46.232 CC lib/vmd/vmd.o 00:02:46.232 CC lib/vmd/led.o 00:02:46.232 CC lib/idxd/idxd.o 00:02:46.232 CC lib/idxd/idxd_user.o 00:02:46.232 CC lib/idxd/idxd_kernel.o 00:02:46.232 CC lib/json/json_parse.o 00:02:46.232 CC lib/conf/conf.o 00:02:46.232 CC lib/env_dpdk/env.o 00:02:46.232 
SYMLINK libspdk_trace_parser.so 00:02:46.232 CC lib/env_dpdk/memory.o 00:02:46.491 CC lib/env_dpdk/pci.o 00:02:46.491 CC lib/env_dpdk/init.o 00:02:46.491 LIB libspdk_conf.a 00:02:46.491 CC lib/json/json_util.o 00:02:46.491 CC lib/json/json_write.o 00:02:46.491 SO libspdk_conf.so.6.0 00:02:46.491 LIB libspdk_rdma_utils.a 00:02:46.491 SO libspdk_rdma_utils.so.1.0 00:02:46.491 SYMLINK libspdk_conf.so 00:02:46.749 CC lib/env_dpdk/threads.o 00:02:46.749 SYMLINK libspdk_rdma_utils.so 00:02:46.749 CC lib/env_dpdk/pci_ioat.o 00:02:46.749 CC lib/env_dpdk/pci_virtio.o 00:02:46.749 CC lib/env_dpdk/pci_vmd.o 00:02:46.749 CC lib/env_dpdk/pci_idxd.o 00:02:46.749 CC lib/env_dpdk/pci_event.o 00:02:46.749 CC lib/env_dpdk/sigbus_handler.o 00:02:46.749 LIB libspdk_json.a 00:02:47.007 SO libspdk_json.so.6.0 00:02:47.007 CC lib/env_dpdk/pci_dpdk.o 00:02:47.007 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:47.007 SYMLINK libspdk_json.so 00:02:47.007 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:47.007 LIB libspdk_idxd.a 00:02:47.007 CC lib/rdma_provider/common.o 00:02:47.007 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:47.007 SO libspdk_idxd.so.12.1 00:02:47.007 LIB libspdk_vmd.a 00:02:47.007 SO libspdk_vmd.so.6.0 00:02:47.007 CC lib/jsonrpc/jsonrpc_server.o 00:02:47.007 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:47.007 CC lib/jsonrpc/jsonrpc_client.o 00:02:47.266 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:47.266 SYMLINK libspdk_idxd.so 00:02:47.266 SYMLINK libspdk_vmd.so 00:02:47.266 LIB libspdk_rdma_provider.a 00:02:47.266 SO libspdk_rdma_provider.so.7.0 00:02:47.524 LIB libspdk_jsonrpc.a 00:02:47.524 SYMLINK libspdk_rdma_provider.so 00:02:47.524 SO libspdk_jsonrpc.so.6.0 00:02:47.524 SYMLINK libspdk_jsonrpc.so 00:02:48.092 CC lib/rpc/rpc.o 00:02:48.350 LIB libspdk_env_dpdk.a 00:02:48.350 LIB libspdk_rpc.a 00:02:48.350 SO libspdk_rpc.so.6.0 00:02:48.350 SO libspdk_env_dpdk.so.15.1 00:02:48.350 SYMLINK libspdk_rpc.so 00:02:48.609 SYMLINK libspdk_env_dpdk.so 00:02:48.609 CC lib/keyring/keyring.o 
00:02:48.609 CC lib/keyring/keyring_rpc.o 00:02:48.609 CC lib/notify/notify_rpc.o 00:02:48.609 CC lib/notify/notify.o 00:02:48.609 CC lib/trace/trace.o 00:02:48.609 CC lib/trace/trace_flags.o 00:02:48.609 CC lib/trace/trace_rpc.o 00:02:48.869 LIB libspdk_notify.a 00:02:48.869 SO libspdk_notify.so.6.0 00:02:48.869 LIB libspdk_keyring.a 00:02:49.127 SYMLINK libspdk_notify.so 00:02:49.127 SO libspdk_keyring.so.2.0 00:02:49.127 LIB libspdk_trace.a 00:02:49.127 SO libspdk_trace.so.11.0 00:02:49.127 SYMLINK libspdk_keyring.so 00:02:49.127 SYMLINK libspdk_trace.so 00:02:49.694 CC lib/sock/sock.o 00:02:49.694 CC lib/sock/sock_rpc.o 00:02:49.694 CC lib/thread/thread.o 00:02:49.694 CC lib/thread/iobuf.o 00:02:49.952 LIB libspdk_sock.a 00:02:50.211 SO libspdk_sock.so.10.0 00:02:50.211 SYMLINK libspdk_sock.so 00:02:50.469 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:50.469 CC lib/nvme/nvme_ctrlr.o 00:02:50.469 CC lib/nvme/nvme_fabric.o 00:02:50.469 CC lib/nvme/nvme_ns_cmd.o 00:02:50.469 CC lib/nvme/nvme_ns.o 00:02:50.469 CC lib/nvme/nvme_pcie_common.o 00:02:50.727 CC lib/nvme/nvme_pcie.o 00:02:50.727 CC lib/nvme/nvme.o 00:02:50.727 CC lib/nvme/nvme_qpair.o 00:02:51.292 CC lib/nvme/nvme_quirks.o 00:02:51.292 LIB libspdk_thread.a 00:02:51.292 CC lib/nvme/nvme_transport.o 00:02:51.292 SO libspdk_thread.so.11.0 00:02:51.292 CC lib/nvme/nvme_discovery.o 00:02:51.550 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:51.550 SYMLINK libspdk_thread.so 00:02:51.550 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:51.550 CC lib/nvme/nvme_tcp.o 00:02:51.550 CC lib/nvme/nvme_opal.o 00:02:51.550 CC lib/nvme/nvme_io_msg.o 00:02:51.808 CC lib/nvme/nvme_poll_group.o 00:02:51.808 CC lib/nvme/nvme_zns.o 00:02:52.066 CC lib/nvme/nvme_stubs.o 00:02:52.066 CC lib/nvme/nvme_auth.o 00:02:52.329 CC lib/accel/accel.o 00:02:52.329 CC lib/nvme/nvme_cuse.o 00:02:52.329 CC lib/blob/blobstore.o 00:02:52.329 CC lib/init/json_config.o 00:02:52.329 CC lib/blob/request.o 00:02:52.329 CC lib/blob/zeroes.o 00:02:52.607 CC 
lib/blob/blob_bs_dev.o 00:02:52.607 CC lib/init/subsystem.o 00:02:52.607 CC lib/accel/accel_rpc.o 00:02:52.886 CC lib/init/subsystem_rpc.o 00:02:52.886 CC lib/init/rpc.o 00:02:52.886 CC lib/virtio/virtio.o 00:02:52.886 CC lib/nvme/nvme_rdma.o 00:02:52.886 CC lib/fsdev/fsdev.o 00:02:52.886 CC lib/accel/accel_sw.o 00:02:53.154 LIB libspdk_init.a 00:02:53.154 SO libspdk_init.so.6.0 00:02:53.154 SYMLINK libspdk_init.so 00:02:53.154 CC lib/fsdev/fsdev_io.o 00:02:53.154 CC lib/fsdev/fsdev_rpc.o 00:02:53.154 CC lib/virtio/virtio_vhost_user.o 00:02:53.413 CC lib/virtio/virtio_vfio_user.o 00:02:53.413 CC lib/virtio/virtio_pci.o 00:02:53.413 CC lib/event/app.o 00:02:53.413 CC lib/event/reactor.o 00:02:53.413 LIB libspdk_accel.a 00:02:53.671 CC lib/event/log_rpc.o 00:02:53.671 CC lib/event/app_rpc.o 00:02:53.671 SO libspdk_accel.so.16.0 00:02:53.671 SYMLINK libspdk_accel.so 00:02:53.671 CC lib/event/scheduler_static.o 00:02:53.671 LIB libspdk_virtio.a 00:02:53.671 SO libspdk_virtio.so.7.0 00:02:53.671 LIB libspdk_fsdev.a 00:02:53.671 SO libspdk_fsdev.so.2.0 00:02:53.929 SYMLINK libspdk_virtio.so 00:02:53.929 CC lib/bdev/bdev.o 00:02:53.929 CC lib/bdev/bdev_rpc.o 00:02:53.929 CC lib/bdev/scsi_nvme.o 00:02:53.929 CC lib/bdev/part.o 00:02:53.929 CC lib/bdev/bdev_zone.o 00:02:53.929 SYMLINK libspdk_fsdev.so 00:02:54.186 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:54.186 LIB libspdk_event.a 00:02:54.186 SO libspdk_event.so.14.0 00:02:54.186 SYMLINK libspdk_event.so 00:02:54.443 LIB libspdk_nvme.a 00:02:54.701 SO libspdk_nvme.so.15.0 00:02:54.701 LIB libspdk_fuse_dispatcher.a 00:02:54.701 SO libspdk_fuse_dispatcher.so.1.0 00:02:54.959 SYMLINK libspdk_fuse_dispatcher.so 00:02:54.959 SYMLINK libspdk_nvme.so 00:02:56.861 LIB libspdk_blob.a 00:02:56.861 SO libspdk_blob.so.11.0 00:02:56.861 SYMLINK libspdk_blob.so 00:02:57.119 CC lib/lvol/lvol.o 00:02:57.119 CC lib/blobfs/tree.o 00:02:57.119 CC lib/blobfs/blobfs.o 00:02:57.119 LIB libspdk_bdev.a 00:02:57.378 SO libspdk_bdev.so.17.0 
00:02:57.378 SYMLINK libspdk_bdev.so 00:02:57.636 CC lib/ftl/ftl_core.o 00:02:57.636 CC lib/ftl/ftl_init.o 00:02:57.636 CC lib/ftl/ftl_debug.o 00:02:57.637 CC lib/ftl/ftl_layout.o 00:02:57.637 CC lib/scsi/dev.o 00:02:57.637 CC lib/nbd/nbd.o 00:02:57.637 CC lib/ublk/ublk.o 00:02:57.897 CC lib/nvmf/ctrlr.o 00:02:57.897 CC lib/scsi/lun.o 00:02:57.897 CC lib/scsi/port.o 00:02:57.897 CC lib/nvmf/ctrlr_discovery.o 00:02:58.156 CC lib/nbd/nbd_rpc.o 00:02:58.156 LIB libspdk_blobfs.a 00:02:58.156 LIB libspdk_lvol.a 00:02:58.156 CC lib/scsi/scsi.o 00:02:58.156 SO libspdk_blobfs.so.10.0 00:02:58.156 SO libspdk_lvol.so.10.0 00:02:58.156 CC lib/ftl/ftl_io.o 00:02:58.156 CC lib/scsi/scsi_bdev.o 00:02:58.156 SYMLINK libspdk_lvol.so 00:02:58.156 CC lib/scsi/scsi_pr.o 00:02:58.415 SYMLINK libspdk_blobfs.so 00:02:58.415 CC lib/scsi/scsi_rpc.o 00:02:58.415 CC lib/scsi/task.o 00:02:58.415 LIB libspdk_nbd.a 00:02:58.415 CC lib/ublk/ublk_rpc.o 00:02:58.415 SO libspdk_nbd.so.7.0 00:02:58.415 SYMLINK libspdk_nbd.so 00:02:58.415 CC lib/nvmf/ctrlr_bdev.o 00:02:58.415 CC lib/nvmf/subsystem.o 00:02:58.415 CC lib/ftl/ftl_sb.o 00:02:58.415 CC lib/ftl/ftl_l2p.o 00:02:58.673 LIB libspdk_ublk.a 00:02:58.673 CC lib/ftl/ftl_l2p_flat.o 00:02:58.673 SO libspdk_ublk.so.3.0 00:02:58.673 CC lib/nvmf/nvmf.o 00:02:58.673 CC lib/nvmf/nvmf_rpc.o 00:02:58.673 SYMLINK libspdk_ublk.so 00:02:58.673 CC lib/nvmf/transport.o 00:02:58.673 CC lib/ftl/ftl_nv_cache.o 00:02:58.673 CC lib/ftl/ftl_band.o 00:02:58.932 CC lib/ftl/ftl_band_ops.o 00:02:58.932 LIB libspdk_scsi.a 00:02:58.932 SO libspdk_scsi.so.9.0 00:02:59.190 SYMLINK libspdk_scsi.so 00:02:59.190 CC lib/nvmf/tcp.o 00:02:59.190 CC lib/ftl/ftl_writer.o 00:02:59.190 CC lib/ftl/ftl_rq.o 00:02:59.190 CC lib/nvmf/stubs.o 00:02:59.758 CC lib/nvmf/mdns_server.o 00:02:59.758 CC lib/iscsi/conn.o 00:02:59.758 CC lib/vhost/vhost.o 00:02:59.758 CC lib/vhost/vhost_rpc.o 00:02:59.758 CC lib/iscsi/init_grp.o 00:02:59.758 CC lib/nvmf/rdma.o 00:02:59.758 CC lib/ftl/ftl_reloc.o 
00:03:00.016 CC lib/nvmf/auth.o 00:03:00.016 CC lib/ftl/ftl_l2p_cache.o 00:03:00.016 CC lib/iscsi/iscsi.o 00:03:00.274 CC lib/iscsi/param.o 00:03:00.274 CC lib/iscsi/portal_grp.o 00:03:00.274 CC lib/vhost/vhost_scsi.o 00:03:00.557 CC lib/vhost/vhost_blk.o 00:03:00.557 CC lib/iscsi/tgt_node.o 00:03:00.557 CC lib/iscsi/iscsi_subsystem.o 00:03:00.557 CC lib/vhost/rte_vhost_user.o 00:03:00.557 CC lib/ftl/ftl_p2l.o 00:03:00.816 CC lib/ftl/ftl_p2l_log.o 00:03:01.075 CC lib/iscsi/iscsi_rpc.o 00:03:01.075 CC lib/ftl/mngt/ftl_mngt.o 00:03:01.075 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:01.334 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:01.334 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:01.334 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:01.334 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:01.334 CC lib/iscsi/task.o 00:03:01.593 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:01.593 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:01.593 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:01.593 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:01.593 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:01.593 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:01.593 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:01.593 CC lib/ftl/utils/ftl_conf.o 00:03:01.852 CC lib/ftl/utils/ftl_md.o 00:03:01.852 LIB libspdk_iscsi.a 00:03:01.852 CC lib/ftl/utils/ftl_mempool.o 00:03:01.852 LIB libspdk_vhost.a 00:03:01.852 SO libspdk_iscsi.so.8.0 00:03:01.852 CC lib/ftl/utils/ftl_bitmap.o 00:03:01.852 SO libspdk_vhost.so.8.0 00:03:01.852 CC lib/ftl/utils/ftl_property.o 00:03:01.852 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:01.852 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:02.110 SYMLINK libspdk_vhost.so 00:03:02.110 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:02.110 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:02.111 SYMLINK libspdk_iscsi.so 00:03:02.111 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:02.111 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:02.111 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:02.111 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:02.111 CC lib/ftl/upgrade/ftl_sb_v5.o 
00:03:02.111 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:02.369 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:02.369 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:02.369 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:02.369 CC lib/ftl/base/ftl_base_dev.o 00:03:02.369 CC lib/ftl/base/ftl_base_bdev.o 00:03:02.369 CC lib/ftl/ftl_trace.o 00:03:02.628 LIB libspdk_ftl.a 00:03:02.887 LIB libspdk_nvmf.a 00:03:02.887 SO libspdk_ftl.so.9.0 00:03:02.887 SO libspdk_nvmf.so.20.0 00:03:03.146 SYMLINK libspdk_ftl.so 00:03:03.146 SYMLINK libspdk_nvmf.so 00:03:03.713 CC module/env_dpdk/env_dpdk_rpc.o 00:03:03.713 CC module/keyring/file/keyring.o 00:03:03.713 CC module/keyring/linux/keyring.o 00:03:03.713 CC module/blob/bdev/blob_bdev.o 00:03:03.713 CC module/accel/error/accel_error.o 00:03:03.713 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:03.713 CC module/scheduler/gscheduler/gscheduler.o 00:03:03.713 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:03.713 CC module/sock/posix/posix.o 00:03:03.713 CC module/fsdev/aio/fsdev_aio.o 00:03:03.971 LIB libspdk_env_dpdk_rpc.a 00:03:03.971 CC module/keyring/linux/keyring_rpc.o 00:03:03.971 SO libspdk_env_dpdk_rpc.so.6.0 00:03:03.971 LIB libspdk_scheduler_dpdk_governor.a 00:03:03.971 LIB libspdk_scheduler_gscheduler.a 00:03:03.971 CC module/keyring/file/keyring_rpc.o 00:03:03.971 SO libspdk_scheduler_gscheduler.so.4.0 00:03:03.971 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:03.971 SYMLINK libspdk_env_dpdk_rpc.so 00:03:03.971 LIB libspdk_scheduler_dynamic.a 00:03:03.971 CC module/accel/error/accel_error_rpc.o 00:03:03.971 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:03.971 SO libspdk_scheduler_dynamic.so.4.0 00:03:03.971 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:03.971 SYMLINK libspdk_scheduler_gscheduler.so 00:03:03.972 CC module/fsdev/aio/linux_aio_mgr.o 00:03:03.972 LIB libspdk_keyring_linux.a 00:03:04.230 SYMLINK libspdk_scheduler_dynamic.so 00:03:04.230 LIB libspdk_blob_bdev.a 00:03:04.230 SO libspdk_keyring_linux.so.1.0 
00:03:04.230 LIB libspdk_keyring_file.a 00:03:04.230 SO libspdk_blob_bdev.so.11.0 00:03:04.230 SO libspdk_keyring_file.so.2.0 00:03:04.230 LIB libspdk_accel_error.a 00:03:04.230 SYMLINK libspdk_keyring_linux.so 00:03:04.230 SO libspdk_accel_error.so.2.0 00:03:04.230 SYMLINK libspdk_blob_bdev.so 00:03:04.230 SYMLINK libspdk_keyring_file.so 00:03:04.230 CC module/accel/ioat/accel_ioat.o 00:03:04.230 CC module/accel/ioat/accel_ioat_rpc.o 00:03:04.230 SYMLINK libspdk_accel_error.so 00:03:04.230 CC module/accel/dsa/accel_dsa.o 00:03:04.230 CC module/accel/dsa/accel_dsa_rpc.o 00:03:04.488 CC module/accel/iaa/accel_iaa.o 00:03:04.488 CC module/accel/iaa/accel_iaa_rpc.o 00:03:04.488 LIB libspdk_accel_ioat.a 00:03:04.488 CC module/bdev/error/vbdev_error.o 00:03:04.488 CC module/bdev/delay/vbdev_delay.o 00:03:04.488 SO libspdk_accel_ioat.so.6.0 00:03:04.488 CC module/blobfs/bdev/blobfs_bdev.o 00:03:04.488 SYMLINK libspdk_accel_ioat.so 00:03:04.488 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:04.488 CC module/bdev/error/vbdev_error_rpc.o 00:03:04.488 CC module/bdev/gpt/gpt.o 00:03:04.488 LIB libspdk_accel_dsa.a 00:03:04.746 LIB libspdk_fsdev_aio.a 00:03:04.746 LIB libspdk_accel_iaa.a 00:03:04.746 SO libspdk_accel_dsa.so.5.0 00:03:04.746 SO libspdk_accel_iaa.so.3.0 00:03:04.746 SO libspdk_fsdev_aio.so.1.0 00:03:04.746 CC module/bdev/gpt/vbdev_gpt.o 00:03:04.746 LIB libspdk_sock_posix.a 00:03:04.746 SYMLINK libspdk_accel_dsa.so 00:03:04.746 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:04.746 SYMLINK libspdk_fsdev_aio.so 00:03:04.746 SYMLINK libspdk_accel_iaa.so 00:03:04.746 LIB libspdk_blobfs_bdev.a 00:03:04.746 SO libspdk_sock_posix.so.6.0 00:03:04.746 SO libspdk_blobfs_bdev.so.6.0 00:03:04.746 LIB libspdk_bdev_error.a 00:03:04.746 SO libspdk_bdev_error.so.6.0 00:03:04.746 SYMLINK libspdk_sock_posix.so 00:03:04.746 SYMLINK libspdk_blobfs_bdev.so 00:03:05.005 SYMLINK libspdk_bdev_error.so 00:03:05.005 LIB libspdk_bdev_delay.a 00:03:05.005 CC module/bdev/lvol/vbdev_lvol.o 
00:03:05.005 CC module/bdev/malloc/bdev_malloc.o 00:03:05.005 CC module/bdev/null/bdev_null.o 00:03:05.005 SO libspdk_bdev_delay.so.6.0 00:03:05.005 LIB libspdk_bdev_gpt.a 00:03:05.005 CC module/bdev/nvme/bdev_nvme.o 00:03:05.005 SO libspdk_bdev_gpt.so.6.0 00:03:05.005 CC module/bdev/passthru/vbdev_passthru.o 00:03:05.005 CC module/bdev/raid/bdev_raid.o 00:03:05.005 SYMLINK libspdk_bdev_delay.so 00:03:05.005 CC module/bdev/split/vbdev_split.o 00:03:05.005 CC module/bdev/null/bdev_null_rpc.o 00:03:05.005 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:05.005 SYMLINK libspdk_bdev_gpt.so 00:03:05.005 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:05.263 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:05.263 LIB libspdk_bdev_null.a 00:03:05.263 CC module/bdev/raid/bdev_raid_rpc.o 00:03:05.263 CC module/bdev/split/vbdev_split_rpc.o 00:03:05.263 SO libspdk_bdev_null.so.6.0 00:03:05.522 LIB libspdk_bdev_passthru.a 00:03:05.522 SYMLINK libspdk_bdev_null.so 00:03:05.522 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:05.522 SO libspdk_bdev_passthru.so.6.0 00:03:05.522 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:05.522 LIB libspdk_bdev_zone_block.a 00:03:05.522 LIB libspdk_bdev_split.a 00:03:05.522 SO libspdk_bdev_zone_block.so.6.0 00:03:05.522 SYMLINK libspdk_bdev_passthru.so 00:03:05.522 SO libspdk_bdev_split.so.6.0 00:03:05.522 CC module/bdev/aio/bdev_aio.o 00:03:05.522 SYMLINK libspdk_bdev_zone_block.so 00:03:05.522 CC module/bdev/aio/bdev_aio_rpc.o 00:03:05.522 LIB libspdk_bdev_malloc.a 00:03:05.522 SYMLINK libspdk_bdev_split.so 00:03:05.522 CC module/bdev/raid/bdev_raid_sb.o 00:03:05.522 SO libspdk_bdev_malloc.so.6.0 00:03:05.781 CC module/bdev/ftl/bdev_ftl.o 00:03:05.781 SYMLINK libspdk_bdev_malloc.so 00:03:05.781 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:05.781 CC module/bdev/iscsi/bdev_iscsi.o 00:03:05.781 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:05.781 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:05.781 LIB libspdk_bdev_lvol.a 00:03:05.781 SO 
libspdk_bdev_lvol.so.6.0 00:03:06.039 CC module/bdev/raid/raid0.o 00:03:06.039 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:06.039 CC module/bdev/nvme/nvme_rpc.o 00:03:06.039 SYMLINK libspdk_bdev_lvol.so 00:03:06.039 CC module/bdev/nvme/bdev_mdns_client.o 00:03:06.039 LIB libspdk_bdev_aio.a 00:03:06.039 SO libspdk_bdev_aio.so.6.0 00:03:06.039 LIB libspdk_bdev_ftl.a 00:03:06.039 SO libspdk_bdev_ftl.so.6.0 00:03:06.039 SYMLINK libspdk_bdev_aio.so 00:03:06.039 CC module/bdev/nvme/vbdev_opal.o 00:03:06.039 SYMLINK libspdk_bdev_ftl.so 00:03:06.039 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:06.039 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:06.039 LIB libspdk_bdev_iscsi.a 00:03:06.298 SO libspdk_bdev_iscsi.so.6.0 00:03:06.298 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:06.298 CC module/bdev/raid/raid1.o 00:03:06.298 SYMLINK libspdk_bdev_iscsi.so 00:03:06.298 CC module/bdev/raid/concat.o 00:03:06.298 CC module/bdev/raid/raid5f.o 00:03:06.298 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:06.557 LIB libspdk_bdev_virtio.a 00:03:06.557 SO libspdk_bdev_virtio.so.6.0 00:03:06.557 SYMLINK libspdk_bdev_virtio.so 00:03:06.819 LIB libspdk_bdev_raid.a 00:03:07.086 SO libspdk_bdev_raid.so.6.0 00:03:07.086 SYMLINK libspdk_bdev_raid.so 00:03:08.024 LIB libspdk_bdev_nvme.a 00:03:08.282 SO libspdk_bdev_nvme.so.7.1 00:03:08.282 SYMLINK libspdk_bdev_nvme.so 00:03:08.849 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:08.849 CC module/event/subsystems/scheduler/scheduler.o 00:03:08.849 CC module/event/subsystems/sock/sock.o 00:03:08.849 CC module/event/subsystems/iobuf/iobuf.o 00:03:08.849 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:08.849 CC module/event/subsystems/keyring/keyring.o 00:03:08.849 CC module/event/subsystems/vmd/vmd.o 00:03:08.849 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:08.849 CC module/event/subsystems/fsdev/fsdev.o 00:03:09.108 LIB libspdk_event_keyring.a 00:03:09.108 LIB libspdk_event_vhost_blk.a 00:03:09.108 LIB libspdk_event_vmd.a 00:03:09.108 LIB 
libspdk_event_sock.a 00:03:09.108 LIB libspdk_event_scheduler.a 00:03:09.108 LIB libspdk_event_fsdev.a 00:03:09.108 SO libspdk_event_keyring.so.1.0 00:03:09.108 LIB libspdk_event_iobuf.a 00:03:09.108 SO libspdk_event_sock.so.5.0 00:03:09.108 SO libspdk_event_vhost_blk.so.3.0 00:03:09.108 SO libspdk_event_vmd.so.6.0 00:03:09.108 SO libspdk_event_fsdev.so.1.0 00:03:09.108 SO libspdk_event_scheduler.so.4.0 00:03:09.108 SO libspdk_event_iobuf.so.3.0 00:03:09.108 SYMLINK libspdk_event_keyring.so 00:03:09.108 SYMLINK libspdk_event_sock.so 00:03:09.108 SYMLINK libspdk_event_fsdev.so 00:03:09.108 SYMLINK libspdk_event_vhost_blk.so 00:03:09.108 SYMLINK libspdk_event_scheduler.so 00:03:09.108 SYMLINK libspdk_event_vmd.so 00:03:09.108 SYMLINK libspdk_event_iobuf.so 00:03:09.676 CC module/event/subsystems/accel/accel.o 00:03:09.676 LIB libspdk_event_accel.a 00:03:09.676 SO libspdk_event_accel.so.6.0 00:03:09.936 SYMLINK libspdk_event_accel.so 00:03:10.194 CC module/event/subsystems/bdev/bdev.o 00:03:10.454 LIB libspdk_event_bdev.a 00:03:10.454 SO libspdk_event_bdev.so.6.0 00:03:10.454 SYMLINK libspdk_event_bdev.so 00:03:11.023 CC module/event/subsystems/ublk/ublk.o 00:03:11.023 CC module/event/subsystems/scsi/scsi.o 00:03:11.023 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:11.023 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.023 CC module/event/subsystems/nbd/nbd.o 00:03:11.023 LIB libspdk_event_ublk.a 00:03:11.023 LIB libspdk_event_scsi.a 00:03:11.023 LIB libspdk_event_nbd.a 00:03:11.023 SO libspdk_event_ublk.so.3.0 00:03:11.023 SO libspdk_event_nbd.so.6.0 00:03:11.023 SO libspdk_event_scsi.so.6.0 00:03:11.023 LIB libspdk_event_nvmf.a 00:03:11.023 SYMLINK libspdk_event_ublk.so 00:03:11.023 SYMLINK libspdk_event_nbd.so 00:03:11.023 SYMLINK libspdk_event_scsi.so 00:03:11.023 SO libspdk_event_nvmf.so.6.0 00:03:11.283 SYMLINK libspdk_event_nvmf.so 00:03:11.541 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.541 CC module/event/subsystems/iscsi/iscsi.o 
00:03:11.541 LIB libspdk_event_vhost_scsi.a 00:03:11.800 LIB libspdk_event_iscsi.a 00:03:11.800 SO libspdk_event_vhost_scsi.so.3.0 00:03:11.800 SO libspdk_event_iscsi.so.6.0 00:03:11.800 SYMLINK libspdk_event_vhost_scsi.so 00:03:11.800 SYMLINK libspdk_event_iscsi.so 00:03:12.058 SO libspdk.so.6.0 00:03:12.058 SYMLINK libspdk.so 00:03:12.316 CC app/trace_record/trace_record.o 00:03:12.316 CC app/spdk_nvme_perf/perf.o 00:03:12.316 CXX app/trace/trace.o 00:03:12.316 CC app/spdk_nvme_identify/identify.o 00:03:12.316 CC app/spdk_lspci/spdk_lspci.o 00:03:12.316 CC app/nvmf_tgt/nvmf_main.o 00:03:12.316 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.316 CC app/spdk_tgt/spdk_tgt.o 00:03:12.316 CC examples/util/zipf/zipf.o 00:03:12.575 CC test/thread/poller_perf/poller_perf.o 00:03:12.575 LINK spdk_lspci 00:03:12.575 LINK nvmf_tgt 00:03:12.575 LINK iscsi_tgt 00:03:12.575 LINK zipf 00:03:12.575 LINK spdk_trace_record 00:03:12.575 LINK poller_perf 00:03:12.575 LINK spdk_tgt 00:03:12.833 LINK spdk_trace 00:03:12.833 CC examples/ioat/perf/perf.o 00:03:12.833 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.833 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:12.833 CC app/spdk_top/spdk_top.o 00:03:13.092 CC examples/thread/thread/thread_ex.o 00:03:13.092 CC app/spdk_dd/spdk_dd.o 00:03:13.092 CC test/dma/test_dma/test_dma.o 00:03:13.092 LINK interrupt_tgt 00:03:13.092 CC examples/sock/hello_world/hello_sock.o 00:03:13.092 LINK spdk_nvme_discover 00:03:13.092 LINK ioat_perf 00:03:13.350 LINK thread 00:03:13.350 LINK spdk_nvme_perf 00:03:13.350 CC examples/ioat/verify/verify.o 00:03:13.350 LINK spdk_nvme_identify 00:03:13.350 LINK hello_sock 00:03:13.350 LINK spdk_dd 00:03:13.350 CC examples/vmd/lsvmd/lsvmd.o 00:03:13.350 CC examples/idxd/perf/perf.o 00:03:13.609 CC examples/vmd/led/led.o 00:03:13.609 LINK lsvmd 00:03:13.609 LINK verify 00:03:13.609 LINK test_dma 00:03:13.609 CC examples/accel/perf/accel_perf.o 00:03:13.609 LINK led 00:03:13.609 CC examples/blob/hello_world/hello_blob.o 
00:03:13.609 CC examples/blob/cli/blobcli.o 00:03:13.609 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:13.868 LINK idxd_perf 00:03:13.869 CC examples/nvme/hello_world/hello_world.o 00:03:13.869 CC app/fio/nvme/fio_plugin.o 00:03:13.869 LINK spdk_top 00:03:13.869 LINK hello_blob 00:03:13.869 CC test/app/bdev_svc/bdev_svc.o 00:03:14.127 LINK hello_fsdev 00:03:14.127 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:14.127 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:14.127 LINK hello_world 00:03:14.127 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:14.127 LINK bdev_svc 00:03:14.127 LINK accel_perf 00:03:14.127 CC test/app/histogram_perf/histogram_perf.o 00:03:14.127 LINK blobcli 00:03:14.127 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:14.386 CC test/app/jsoncat/jsoncat.o 00:03:14.386 LINK histogram_perf 00:03:14.386 CC examples/nvme/reconnect/reconnect.o 00:03:14.386 CC test/app/stub/stub.o 00:03:14.386 LINK jsoncat 00:03:14.386 CC app/fio/bdev/fio_plugin.o 00:03:14.386 LINK spdk_nvme 00:03:14.386 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.386 LINK nvme_fuzz 00:03:14.653 LINK stub 00:03:14.653 CC test/blobfs/mkfs/mkfs.o 00:03:14.653 TEST_HEADER include/spdk/accel.h 00:03:14.653 CC app/vhost/vhost.o 00:03:14.653 TEST_HEADER include/spdk/accel_module.h 00:03:14.653 TEST_HEADER include/spdk/assert.h 00:03:14.653 TEST_HEADER include/spdk/barrier.h 00:03:14.653 TEST_HEADER include/spdk/base64.h 00:03:14.653 TEST_HEADER include/spdk/bdev.h 00:03:14.653 TEST_HEADER include/spdk/bdev_module.h 00:03:14.653 TEST_HEADER include/spdk/bdev_zone.h 00:03:14.653 TEST_HEADER include/spdk/bit_array.h 00:03:14.653 TEST_HEADER include/spdk/bit_pool.h 00:03:14.653 TEST_HEADER include/spdk/blob_bdev.h 00:03:14.653 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:14.653 TEST_HEADER include/spdk/blobfs.h 00:03:14.653 TEST_HEADER include/spdk/blob.h 00:03:14.653 TEST_HEADER include/spdk/conf.h 00:03:14.653 TEST_HEADER include/spdk/config.h 00:03:14.653 TEST_HEADER 
include/spdk/cpuset.h 00:03:14.653 TEST_HEADER include/spdk/crc16.h 00:03:14.653 TEST_HEADER include/spdk/crc32.h 00:03:14.653 TEST_HEADER include/spdk/crc64.h 00:03:14.653 TEST_HEADER include/spdk/dif.h 00:03:14.653 TEST_HEADER include/spdk/dma.h 00:03:14.653 TEST_HEADER include/spdk/endian.h 00:03:14.653 LINK vhost_fuzz 00:03:14.653 TEST_HEADER include/spdk/env_dpdk.h 00:03:14.653 TEST_HEADER include/spdk/env.h 00:03:14.653 TEST_HEADER include/spdk/event.h 00:03:14.653 TEST_HEADER include/spdk/fd_group.h 00:03:14.653 TEST_HEADER include/spdk/fd.h 00:03:14.653 LINK reconnect 00:03:14.653 TEST_HEADER include/spdk/file.h 00:03:14.653 TEST_HEADER include/spdk/fsdev.h 00:03:14.653 TEST_HEADER include/spdk/fsdev_module.h 00:03:14.653 TEST_HEADER include/spdk/ftl.h 00:03:14.653 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:14.653 TEST_HEADER include/spdk/gpt_spec.h 00:03:14.653 TEST_HEADER include/spdk/hexlify.h 00:03:14.653 TEST_HEADER include/spdk/histogram_data.h 00:03:14.653 TEST_HEADER include/spdk/idxd.h 00:03:14.653 TEST_HEADER include/spdk/idxd_spec.h 00:03:14.653 TEST_HEADER include/spdk/init.h 00:03:14.653 TEST_HEADER include/spdk/ioat.h 00:03:14.653 TEST_HEADER include/spdk/ioat_spec.h 00:03:14.653 TEST_HEADER include/spdk/iscsi_spec.h 00:03:14.653 TEST_HEADER include/spdk/json.h 00:03:14.653 TEST_HEADER include/spdk/jsonrpc.h 00:03:14.653 TEST_HEADER include/spdk/keyring.h 00:03:14.653 TEST_HEADER include/spdk/keyring_module.h 00:03:14.653 TEST_HEADER include/spdk/likely.h 00:03:14.653 TEST_HEADER include/spdk/log.h 00:03:14.653 TEST_HEADER include/spdk/lvol.h 00:03:14.653 TEST_HEADER include/spdk/md5.h 00:03:14.653 TEST_HEADER include/spdk/memory.h 00:03:14.653 TEST_HEADER include/spdk/mmio.h 00:03:14.653 TEST_HEADER include/spdk/nbd.h 00:03:14.653 TEST_HEADER include/spdk/net.h 00:03:14.653 TEST_HEADER include/spdk/notify.h 00:03:14.653 TEST_HEADER include/spdk/nvme.h 00:03:14.653 TEST_HEADER include/spdk/nvme_intel.h 00:03:14.653 TEST_HEADER 
include/spdk/nvme_ocssd.h 00:03:14.653 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:14.653 TEST_HEADER include/spdk/nvme_spec.h 00:03:14.653 TEST_HEADER include/spdk/nvme_zns.h 00:03:14.653 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:14.653 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:14.653 TEST_HEADER include/spdk/nvmf.h 00:03:14.653 TEST_HEADER include/spdk/nvmf_spec.h 00:03:14.653 TEST_HEADER include/spdk/nvmf_transport.h 00:03:14.653 TEST_HEADER include/spdk/opal.h 00:03:14.653 TEST_HEADER include/spdk/opal_spec.h 00:03:14.653 TEST_HEADER include/spdk/pci_ids.h 00:03:14.653 TEST_HEADER include/spdk/pipe.h 00:03:14.653 TEST_HEADER include/spdk/queue.h 00:03:14.653 TEST_HEADER include/spdk/reduce.h 00:03:14.653 TEST_HEADER include/spdk/rpc.h 00:03:14.653 TEST_HEADER include/spdk/scheduler.h 00:03:14.653 TEST_HEADER include/spdk/scsi.h 00:03:14.653 TEST_HEADER include/spdk/scsi_spec.h 00:03:14.653 TEST_HEADER include/spdk/sock.h 00:03:14.653 TEST_HEADER include/spdk/stdinc.h 00:03:14.653 TEST_HEADER include/spdk/string.h 00:03:14.653 TEST_HEADER include/spdk/thread.h 00:03:14.653 TEST_HEADER include/spdk/trace.h 00:03:14.653 TEST_HEADER include/spdk/trace_parser.h 00:03:14.653 TEST_HEADER include/spdk/tree.h 00:03:14.653 TEST_HEADER include/spdk/ublk.h 00:03:14.653 TEST_HEADER include/spdk/util.h 00:03:14.653 TEST_HEADER include/spdk/uuid.h 00:03:14.653 TEST_HEADER include/spdk/version.h 00:03:14.653 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:14.653 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:14.653 TEST_HEADER include/spdk/vhost.h 00:03:14.924 TEST_HEADER include/spdk/vmd.h 00:03:14.924 TEST_HEADER include/spdk/xor.h 00:03:14.924 TEST_HEADER include/spdk/zipf.h 00:03:14.924 LINK vhost 00:03:14.924 CXX test/cpp_headers/accel.o 00:03:14.924 LINK mkfs 00:03:14.924 CXX test/cpp_headers/accel_module.o 00:03:14.924 CC examples/bdev/hello_world/hello_bdev.o 00:03:14.924 CC examples/bdev/bdevperf/bdevperf.o 00:03:14.924 CXX test/cpp_headers/assert.o 
00:03:14.924 LINK spdk_bdev 00:03:14.924 CXX test/cpp_headers/barrier.o 00:03:14.924 LINK nvme_manage 00:03:14.924 CXX test/cpp_headers/base64.o 00:03:14.924 CXX test/cpp_headers/bdev.o 00:03:15.184 LINK hello_bdev 00:03:15.184 CXX test/cpp_headers/bdev_module.o 00:03:15.184 CC test/event/event_perf/event_perf.o 00:03:15.184 CC test/env/vtophys/vtophys.o 00:03:15.184 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:15.184 CC examples/nvme/arbitration/arbitration.o 00:03:15.184 CC test/env/mem_callbacks/mem_callbacks.o 00:03:15.184 CC examples/nvme/hotplug/hotplug.o 00:03:15.184 CXX test/cpp_headers/bdev_zone.o 00:03:15.184 LINK event_perf 00:03:15.443 CC test/env/memory/memory_ut.o 00:03:15.443 LINK vtophys 00:03:15.443 LINK env_dpdk_post_init 00:03:15.443 CXX test/cpp_headers/bit_array.o 00:03:15.443 LINK hotplug 00:03:15.443 CXX test/cpp_headers/bit_pool.o 00:03:15.443 CC test/event/reactor/reactor.o 00:03:15.443 CXX test/cpp_headers/blob_bdev.o 00:03:15.702 LINK arbitration 00:03:15.702 CC test/event/reactor_perf/reactor_perf.o 00:03:15.702 LINK reactor 00:03:15.702 CXX test/cpp_headers/blobfs_bdev.o 00:03:15.702 LINK bdevperf 00:03:15.702 CC test/event/app_repeat/app_repeat.o 00:03:15.702 CC test/event/scheduler/scheduler.o 00:03:15.702 LINK mem_callbacks 00:03:15.702 LINK reactor_perf 00:03:15.702 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:15.961 LINK iscsi_fuzz 00:03:15.961 CXX test/cpp_headers/blobfs.o 00:03:15.961 LINK app_repeat 00:03:15.961 CC examples/nvme/abort/abort.o 00:03:15.961 CXX test/cpp_headers/blob.o 00:03:15.961 LINK scheduler 00:03:15.961 LINK cmb_copy 00:03:15.961 CC test/env/pci/pci_ut.o 00:03:15.961 CXX test/cpp_headers/conf.o 00:03:15.961 CXX test/cpp_headers/config.o 00:03:15.961 CXX test/cpp_headers/cpuset.o 00:03:16.220 CXX test/cpp_headers/crc16.o 00:03:16.220 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.220 CC test/lvol/esnap/esnap.o 00:03:16.220 CXX test/cpp_headers/crc32.o 00:03:16.220 CC 
test/rpc_client/rpc_client_test.o 00:03:16.220 LINK pmr_persistence 00:03:16.220 CC test/nvme/aer/aer.o 00:03:16.220 CC test/nvme/reset/reset.o 00:03:16.220 LINK abort 00:03:16.220 CXX test/cpp_headers/crc64.o 00:03:16.479 LINK pci_ut 00:03:16.479 CXX test/cpp_headers/dif.o 00:03:16.479 LINK memory_ut 00:03:16.479 CC test/accel/dif/dif.o 00:03:16.479 LINK rpc_client_test 00:03:16.479 CC test/nvme/sgl/sgl.o 00:03:16.479 CXX test/cpp_headers/dma.o 00:03:16.479 LINK reset 00:03:16.479 CXX test/cpp_headers/endian.o 00:03:16.479 LINK aer 00:03:16.737 CXX test/cpp_headers/env_dpdk.o 00:03:16.737 CXX test/cpp_headers/env.o 00:03:16.737 CC examples/nvmf/nvmf/nvmf.o 00:03:16.737 CXX test/cpp_headers/event.o 00:03:16.737 CXX test/cpp_headers/fd_group.o 00:03:16.737 CC test/nvme/e2edp/nvme_dp.o 00:03:16.737 CC test/nvme/overhead/overhead.o 00:03:16.737 CC test/nvme/err_injection/err_injection.o 00:03:16.997 LINK sgl 00:03:16.997 CC test/nvme/startup/startup.o 00:03:16.997 CXX test/cpp_headers/fd.o 00:03:16.997 CC test/nvme/reserve/reserve.o 00:03:16.997 LINK nvmf 00:03:16.997 LINK err_injection 00:03:16.997 CXX test/cpp_headers/file.o 00:03:16.997 LINK startup 00:03:16.997 LINK nvme_dp 00:03:16.997 LINK overhead 00:03:17.256 CXX test/cpp_headers/fsdev.o 00:03:17.256 CC test/nvme/simple_copy/simple_copy.o 00:03:17.256 LINK reserve 00:03:17.256 LINK dif 00:03:17.256 CC test/nvme/connect_stress/connect_stress.o 00:03:17.256 CC test/nvme/boot_partition/boot_partition.o 00:03:17.256 CC test/nvme/compliance/nvme_compliance.o 00:03:17.256 CC test/nvme/fused_ordering/fused_ordering.o 00:03:17.256 CXX test/cpp_headers/fsdev_module.o 00:03:17.514 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:17.515 LINK simple_copy 00:03:17.515 LINK connect_stress 00:03:17.515 LINK boot_partition 00:03:17.515 CC test/nvme/fdp/fdp.o 00:03:17.515 CXX test/cpp_headers/ftl.o 00:03:17.515 LINK fused_ordering 00:03:17.515 CC test/nvme/cuse/cuse.o 00:03:17.515 LINK doorbell_aers 00:03:17.515 CXX 
test/cpp_headers/fuse_dispatcher.o 00:03:17.515 CXX test/cpp_headers/gpt_spec.o 00:03:17.773 LINK nvme_compliance 00:03:17.773 CXX test/cpp_headers/hexlify.o 00:03:17.773 CXX test/cpp_headers/histogram_data.o 00:03:17.773 CXX test/cpp_headers/idxd.o 00:03:17.773 CXX test/cpp_headers/idxd_spec.o 00:03:17.773 CC test/bdev/bdevio/bdevio.o 00:03:17.773 CXX test/cpp_headers/init.o 00:03:17.773 LINK fdp 00:03:17.773 CXX test/cpp_headers/ioat.o 00:03:17.773 CXX test/cpp_headers/ioat_spec.o 00:03:17.773 CXX test/cpp_headers/iscsi_spec.o 00:03:18.032 CXX test/cpp_headers/json.o 00:03:18.032 CXX test/cpp_headers/jsonrpc.o 00:03:18.032 CXX test/cpp_headers/keyring.o 00:03:18.032 CXX test/cpp_headers/keyring_module.o 00:03:18.032 CXX test/cpp_headers/likely.o 00:03:18.032 CXX test/cpp_headers/log.o 00:03:18.032 CXX test/cpp_headers/lvol.o 00:03:18.032 CXX test/cpp_headers/md5.o 00:03:18.032 CXX test/cpp_headers/memory.o 00:03:18.032 CXX test/cpp_headers/mmio.o 00:03:18.032 CXX test/cpp_headers/nbd.o 00:03:18.290 CXX test/cpp_headers/net.o 00:03:18.290 CXX test/cpp_headers/notify.o 00:03:18.290 CXX test/cpp_headers/nvme.o 00:03:18.290 LINK bdevio 00:03:18.290 CXX test/cpp_headers/nvme_intel.o 00:03:18.290 CXX test/cpp_headers/nvme_ocssd.o 00:03:18.290 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:18.290 CXX test/cpp_headers/nvme_spec.o 00:03:18.290 CXX test/cpp_headers/nvme_zns.o 00:03:18.290 CXX test/cpp_headers/nvmf_cmd.o 00:03:18.290 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:18.549 CXX test/cpp_headers/nvmf.o 00:03:18.549 CXX test/cpp_headers/nvmf_spec.o 00:03:18.549 CXX test/cpp_headers/nvmf_transport.o 00:03:18.549 CXX test/cpp_headers/opal.o 00:03:18.549 CXX test/cpp_headers/opal_spec.o 00:03:18.549 CXX test/cpp_headers/pci_ids.o 00:03:18.549 CXX test/cpp_headers/pipe.o 00:03:18.549 CXX test/cpp_headers/queue.o 00:03:18.549 CXX test/cpp_headers/reduce.o 00:03:18.549 CXX test/cpp_headers/rpc.o 00:03:18.549 CXX test/cpp_headers/scheduler.o 00:03:18.549 CXX 
test/cpp_headers/scsi.o 00:03:18.549 CXX test/cpp_headers/scsi_spec.o 00:03:18.807 CXX test/cpp_headers/sock.o 00:03:18.807 CXX test/cpp_headers/stdinc.o 00:03:18.807 CXX test/cpp_headers/string.o 00:03:18.807 CXX test/cpp_headers/thread.o 00:03:18.807 CXX test/cpp_headers/trace.o 00:03:18.807 CXX test/cpp_headers/trace_parser.o 00:03:18.807 CXX test/cpp_headers/tree.o 00:03:18.807 CXX test/cpp_headers/ublk.o 00:03:18.807 CXX test/cpp_headers/util.o 00:03:18.807 CXX test/cpp_headers/uuid.o 00:03:18.807 CXX test/cpp_headers/version.o 00:03:18.807 CXX test/cpp_headers/vfio_user_pci.o 00:03:18.807 CXX test/cpp_headers/vfio_user_spec.o 00:03:18.807 CXX test/cpp_headers/vhost.o 00:03:18.807 CXX test/cpp_headers/vmd.o 00:03:18.807 CXX test/cpp_headers/xor.o 00:03:19.067 LINK cuse 00:03:19.067 CXX test/cpp_headers/zipf.o 00:03:21.599 LINK esnap 00:03:22.167 00:03:22.167 real 1m25.376s 00:03:22.167 user 7m19.104s 00:03:22.167 sys 1m39.594s 00:03:22.167 18:44:05 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:22.167 18:44:05 make -- common/autotest_common.sh@10 -- $ set +x 00:03:22.167 ************************************ 00:03:22.167 END TEST make 00:03:22.167 ************************************ 00:03:22.167 18:44:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:22.167 18:44:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:22.167 18:44:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:22.167 18:44:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.167 18:44:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:22.167 18:44:05 -- pm/common@44 -- $ pid=5468 00:03:22.167 18:44:05 -- pm/common@50 -- $ kill -TERM 5468 00:03:22.167 18:44:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.167 18:44:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:22.167 18:44:05 -- pm/common@44 
-- $ pid=5470 00:03:22.167 18:44:05 -- pm/common@50 -- $ kill -TERM 5470 00:03:22.167 18:44:05 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:22.167 18:44:05 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:22.167 18:44:05 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:22.167 18:44:05 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:22.167 18:44:05 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:22.427 18:44:05 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:22.427 18:44:05 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:22.427 18:44:05 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:22.427 18:44:05 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:22.427 18:44:05 -- scripts/common.sh@336 -- # IFS=.-: 00:03:22.427 18:44:05 -- scripts/common.sh@336 -- # read -ra ver1 00:03:22.427 18:44:05 -- scripts/common.sh@337 -- # IFS=.-: 00:03:22.427 18:44:05 -- scripts/common.sh@337 -- # read -ra ver2 00:03:22.427 18:44:05 -- scripts/common.sh@338 -- # local 'op=<' 00:03:22.427 18:44:05 -- scripts/common.sh@340 -- # ver1_l=2 00:03:22.427 18:44:05 -- scripts/common.sh@341 -- # ver2_l=1 00:03:22.427 18:44:05 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:22.427 18:44:05 -- scripts/common.sh@344 -- # case "$op" in 00:03:22.427 18:44:05 -- scripts/common.sh@345 -- # : 1 00:03:22.427 18:44:05 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:22.427 18:44:05 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:22.427 18:44:05 -- scripts/common.sh@365 -- # decimal 1 00:03:22.427 18:44:05 -- scripts/common.sh@353 -- # local d=1 00:03:22.427 18:44:05 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:22.427 18:44:05 -- scripts/common.sh@355 -- # echo 1 00:03:22.427 18:44:05 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:22.427 18:44:05 -- scripts/common.sh@366 -- # decimal 2 00:03:22.427 18:44:05 -- scripts/common.sh@353 -- # local d=2 00:03:22.427 18:44:05 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:22.427 18:44:05 -- scripts/common.sh@355 -- # echo 2 00:03:22.427 18:44:05 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:22.427 18:44:05 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:22.427 18:44:05 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:22.427 18:44:05 -- scripts/common.sh@368 -- # return 0 00:03:22.427 18:44:05 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:22.427 18:44:05 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.427 --rc genhtml_branch_coverage=1 00:03:22.427 --rc genhtml_function_coverage=1 00:03:22.427 --rc genhtml_legend=1 00:03:22.427 --rc geninfo_all_blocks=1 00:03:22.427 --rc geninfo_unexecuted_blocks=1 00:03:22.427 00:03:22.427 ' 00:03:22.427 18:44:05 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.427 --rc genhtml_branch_coverage=1 00:03:22.427 --rc genhtml_function_coverage=1 00:03:22.427 --rc genhtml_legend=1 00:03:22.427 --rc geninfo_all_blocks=1 00:03:22.427 --rc geninfo_unexecuted_blocks=1 00:03:22.427 00:03:22.427 ' 00:03:22.427 18:44:05 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.427 --rc genhtml_branch_coverage=1 00:03:22.427 --rc 
genhtml_function_coverage=1 00:03:22.427 --rc genhtml_legend=1 00:03:22.427 --rc geninfo_all_blocks=1 00:03:22.427 --rc geninfo_unexecuted_blocks=1 00:03:22.427 00:03:22.427 ' 00:03:22.427 18:44:05 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:22.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:22.427 --rc genhtml_branch_coverage=1 00:03:22.427 --rc genhtml_function_coverage=1 00:03:22.427 --rc genhtml_legend=1 00:03:22.427 --rc geninfo_all_blocks=1 00:03:22.427 --rc geninfo_unexecuted_blocks=1 00:03:22.427 00:03:22.427 ' 00:03:22.427 18:44:05 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:22.427 18:44:05 -- nvmf/common.sh@7 -- # uname -s 00:03:22.427 18:44:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:22.427 18:44:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:22.427 18:44:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:22.427 18:44:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:22.427 18:44:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:22.427 18:44:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:22.427 18:44:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:22.427 18:44:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:22.427 18:44:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:22.427 18:44:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:22.427 18:44:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79960c01-01ef-4d83-be4c-a620e9048765 00:03:22.427 18:44:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=79960c01-01ef-4d83-be4c-a620e9048765 00:03:22.427 18:44:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:22.427 18:44:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:22.427 18:44:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:22.427 18:44:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:03:22.427 18:44:05 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:22.427 18:44:05 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:22.427 18:44:05 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:22.427 18:44:05 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:22.427 18:44:05 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:22.427 18:44:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.427 18:44:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.427 18:44:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.427 18:44:05 -- paths/export.sh@5 -- # export PATH 00:03:22.427 18:44:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:22.427 18:44:05 -- nvmf/common.sh@51 -- # : 0 00:03:22.427 18:44:05 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:22.427 18:44:05 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:22.427 18:44:05 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:03:22.427 18:44:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:22.427 18:44:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:22.427 18:44:05 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:22.427 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:22.427 18:44:05 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:22.427 18:44:05 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:22.427 18:44:05 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:22.427 18:44:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:22.427 18:44:05 -- spdk/autotest.sh@32 -- # uname -s 00:03:22.427 18:44:05 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:22.427 18:44:05 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:22.427 18:44:05 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.428 18:44:05 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:22.428 18:44:05 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:22.428 18:44:05 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:22.428 18:44:05 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:22.428 18:44:05 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:22.428 18:44:05 -- spdk/autotest.sh@48 -- # udevadm_pid=54469 00:03:22.428 18:44:05 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:22.428 18:44:05 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:22.428 18:44:05 -- pm/common@17 -- # local monitor 00:03:22.428 18:44:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.428 18:44:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:22.428 18:44:05 -- pm/common@25 -- # sleep 1 00:03:22.428 18:44:05 -- pm/common@21 -- # date +%s 00:03:22.428 18:44:05 -- 
pm/common@21 -- # date +%s 00:03:22.428 18:44:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731782645 00:03:22.428 18:44:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731782645 00:03:22.428 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731782645_collect-vmstat.pm.log 00:03:22.428 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731782645_collect-cpu-load.pm.log 00:03:23.372 18:44:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:23.372 18:44:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:23.372 18:44:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:23.372 18:44:06 -- common/autotest_common.sh@10 -- # set +x 00:03:23.662 18:44:06 -- spdk/autotest.sh@59 -- # create_test_list 00:03:23.662 18:44:06 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:23.662 18:44:06 -- common/autotest_common.sh@10 -- # set +x 00:03:23.662 18:44:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:23.662 18:44:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:23.662 18:44:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:23.662 18:44:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:23.662 18:44:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:23.662 18:44:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:23.662 18:44:06 -- common/autotest_common.sh@1457 -- # uname 00:03:23.662 18:44:06 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:23.662 18:44:06 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:23.662 18:44:06 -- common/autotest_common.sh@1477 -- 
# uname 00:03:23.662 18:44:06 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:23.662 18:44:06 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:23.662 18:44:06 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:23.662 lcov: LCOV version 1.15 00:03:23.662 18:44:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:38.557 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:38.557 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:53.474 18:44:35 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:53.474 18:44:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.474 18:44:35 -- common/autotest_common.sh@10 -- # set +x 00:03:53.474 18:44:35 -- spdk/autotest.sh@78 -- # rm -f 00:03:53.474 18:44:35 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:53.474 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:53.474 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:53.474 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:53.474 18:44:36 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:53.474 18:44:36 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:53.474 18:44:36 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:53.474 18:44:36 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:53.474 
18:44:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:53.474 18:44:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:53.474 18:44:36 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:53.474 18:44:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:53.474 18:44:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.474 18:44:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:53.474 18:44:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:03:53.474 18:44:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:53.474 18:44:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:53.474 18:44:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.474 18:44:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:53.474 18:44:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:03:53.474 18:44:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:53.474 18:44:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:53.474 18:44:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.474 18:44:36 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:53.474 18:44:36 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:03:53.474 18:44:36 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:53.474 18:44:36 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:53.474 18:44:36 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:53.474 18:44:36 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:53.474 18:44:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.474 18:44:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.474 18:44:36 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:03:53.474 18:44:36 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:53.474 18:44:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:53.474 No valid GPT data, bailing 00:03:53.474 18:44:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:53.474 18:44:36 -- scripts/common.sh@394 -- # pt= 00:03:53.474 18:44:36 -- scripts/common.sh@395 -- # return 1 00:03:53.474 18:44:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:53.474 1+0 records in 00:03:53.474 1+0 records out 00:03:53.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00607616 s, 173 MB/s 00:03:53.474 18:44:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.474 18:44:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.474 18:44:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:53.474 18:44:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:53.474 18:44:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:53.474 No valid GPT data, bailing 00:03:53.474 18:44:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:53.474 18:44:36 -- scripts/common.sh@394 -- # pt= 00:03:53.474 18:44:36 -- scripts/common.sh@395 -- # return 1 00:03:53.474 18:44:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:53.474 1+0 records in 00:03:53.474 1+0 records out 00:03:53.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432003 s, 243 MB/s 00:03:53.474 18:44:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.474 18:44:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.474 18:44:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:53.474 18:44:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:53.474 18:44:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:03:53.474 No valid GPT data, bailing 00:03:53.746 18:44:36 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:53.746 18:44:36 -- scripts/common.sh@394 -- # pt= 00:03:53.746 18:44:36 -- scripts/common.sh@395 -- # return 1 00:03:53.746 18:44:36 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:53.746 1+0 records in 00:03:53.746 1+0 records out 00:03:53.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00689631 s, 152 MB/s 00:03:53.746 18:44:36 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:53.746 18:44:36 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:53.746 18:44:36 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:53.746 18:44:36 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:53.746 18:44:36 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:53.746 No valid GPT data, bailing 00:03:53.746 18:44:37 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:53.746 18:44:37 -- scripts/common.sh@394 -- # pt= 00:03:53.746 18:44:37 -- scripts/common.sh@395 -- # return 1 00:03:53.746 18:44:37 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:53.746 1+0 records in 00:03:53.746 1+0 records out 00:03:53.746 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042344 s, 248 MB/s 00:03:53.746 18:44:37 -- spdk/autotest.sh@105 -- # sync 00:03:53.746 18:44:37 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:53.746 18:44:37 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:53.746 18:44:37 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.040 18:44:39 -- spdk/autotest.sh@111 -- # uname -s 00:03:57.040 18:44:39 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:57.040 18:44:39 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:57.040 18:44:39 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:03:57.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:57.299 Hugepages 00:03:57.299 node hugesize free / total 00:03:57.299 node0 1048576kB 0 / 0 00:03:57.299 node0 2048kB 0 / 0 00:03:57.299 00:03:57.299 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.299 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:57.557 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:57.557 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:57.557 18:44:40 -- spdk/autotest.sh@117 -- # uname -s 00:03:57.557 18:44:40 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:57.557 18:44:40 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:57.557 18:44:40 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:58.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.494 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.753 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:03:58.753 18:44:42 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:59.692 18:44:43 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:59.692 18:44:43 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:59.692 18:44:43 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:59.692 18:44:43 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:59.692 18:44:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:59.692 18:44:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:59.692 18:44:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:59.692 18:44:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:59.692 18:44:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:59.692 18:44:43 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:03:59.692 18:44:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:03:59.692 18:44:43 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:00.269 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.269 Waiting for block devices as requested 00:04:00.544 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.544 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:00.544 18:44:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.544 18:44:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:00.544 18:44:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:00.545 18:44:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:00.545 18:44:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.545 18:44:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:00.545 18:44:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:00.545 18:44:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:00.545 18:44:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:00.545 18:44:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:00.545 18:44:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:00.545 18:44:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.545 18:44:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.545 18:44:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:00.545 18:44:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.545 18:44:43 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:04:00.545 18:44:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:00.545 18:44:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.545 18:44:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.545 18:44:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.545 18:44:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.545 18:44:43 -- common/autotest_common.sh@1543 -- # continue 00:04:00.545 18:44:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:00.545 18:44:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:00.545 18:44:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:00.545 18:44:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:00.545 18:44:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.545 18:44:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:00.545 18:44:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:00.545 18:44:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:00.545 18:44:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:00.545 18:44:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:00.545 18:44:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:00.545 18:44:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:00.545 18:44:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:00.545 18:44:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:00.545 18:44:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:00.545 18:44:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:00.545 18:44:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:04:00.545 18:44:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:00.545 18:44:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:00.805 18:44:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:00.805 18:44:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:00.805 18:44:44 -- common/autotest_common.sh@1543 -- # continue 00:04:00.805 18:44:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:00.805 18:44:44 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:00.805 18:44:44 -- common/autotest_common.sh@10 -- # set +x 00:04:00.805 18:44:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:00.805 18:44:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:00.805 18:44:44 -- common/autotest_common.sh@10 -- # set +x 00:04:00.805 18:44:44 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:01.743 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.743 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.743 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:01.743 18:44:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:01.743 18:44:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:01.743 18:44:45 -- common/autotest_common.sh@10 -- # set +x 00:04:01.743 18:44:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:01.743 18:44:45 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:01.743 18:44:45 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:01.743 18:44:45 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:01.743 18:44:45 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:01.743 18:44:45 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:01.743 18:44:45 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:01.743 18:44:45 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:01.743 
18:44:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.743 18:44:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.743 18:44:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.743 18:44:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.743 18:44:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:02.003 18:44:45 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:02.003 18:44:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:02.003 18:44:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:02.003 18:44:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:02.003 18:44:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:02.003 18:44:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:02.003 18:44:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:02.003 18:44:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:02.003 18:44:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:02.003 18:44:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:02.003 18:44:45 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:02.003 18:44:45 -- common/autotest_common.sh@1572 -- # return 0 00:04:02.003 18:44:45 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:02.003 18:44:45 -- common/autotest_common.sh@1580 -- # return 0 00:04:02.003 18:44:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:02.003 18:44:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:02.003 18:44:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.003 18:44:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:02.003 18:44:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:02.003 18:44:45 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:04:02.003 18:44:45 -- common/autotest_common.sh@10 -- # set +x 00:04:02.003 18:44:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:02.003 18:44:45 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:02.003 18:44:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.003 18:44:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.003 18:44:45 -- common/autotest_common.sh@10 -- # set +x 00:04:02.003 ************************************ 00:04:02.003 START TEST env 00:04:02.003 ************************************ 00:04:02.003 18:44:45 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:02.003 * Looking for test storage... 00:04:02.003 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:02.003 18:44:45 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:02.003 18:44:45 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:02.003 18:44:45 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:02.263 18:44:45 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:02.263 18:44:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:02.263 18:44:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:02.263 18:44:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:02.263 18:44:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:02.263 18:44:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:02.263 18:44:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:02.263 18:44:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:02.263 18:44:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:02.263 18:44:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:02.263 18:44:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:02.263 18:44:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:02.263 18:44:45 env -- 
scripts/common.sh@344 -- # case "$op" in 00:04:02.263 18:44:45 env -- scripts/common.sh@345 -- # : 1 00:04:02.263 18:44:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:02.263 18:44:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:02.263 18:44:45 env -- scripts/common.sh@365 -- # decimal 1 00:04:02.263 18:44:45 env -- scripts/common.sh@353 -- # local d=1 00:04:02.263 18:44:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:02.263 18:44:45 env -- scripts/common.sh@355 -- # echo 1 00:04:02.263 18:44:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:02.263 18:44:45 env -- scripts/common.sh@366 -- # decimal 2 00:04:02.263 18:44:45 env -- scripts/common.sh@353 -- # local d=2 00:04:02.263 18:44:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:02.263 18:44:45 env -- scripts/common.sh@355 -- # echo 2 00:04:02.263 18:44:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:02.263 18:44:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:02.263 18:44:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:02.263 18:44:45 env -- scripts/common.sh@368 -- # return 0 00:04:02.263 18:44:45 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:02.263 18:44:45 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.263 --rc genhtml_branch_coverage=1 00:04:02.263 --rc genhtml_function_coverage=1 00:04:02.263 --rc genhtml_legend=1 00:04:02.263 --rc geninfo_all_blocks=1 00:04:02.263 --rc geninfo_unexecuted_blocks=1 00:04:02.263 00:04:02.263 ' 00:04:02.263 18:44:45 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.263 --rc genhtml_branch_coverage=1 00:04:02.263 --rc genhtml_function_coverage=1 00:04:02.263 --rc genhtml_legend=1 00:04:02.263 --rc 
geninfo_all_blocks=1 00:04:02.263 --rc geninfo_unexecuted_blocks=1 00:04:02.263 00:04:02.263 ' 00:04:02.263 18:44:45 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.263 --rc genhtml_branch_coverage=1 00:04:02.263 --rc genhtml_function_coverage=1 00:04:02.263 --rc genhtml_legend=1 00:04:02.263 --rc geninfo_all_blocks=1 00:04:02.263 --rc geninfo_unexecuted_blocks=1 00:04:02.263 00:04:02.263 ' 00:04:02.263 18:44:45 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:02.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:02.263 --rc genhtml_branch_coverage=1 00:04:02.263 --rc genhtml_function_coverage=1 00:04:02.263 --rc genhtml_legend=1 00:04:02.263 --rc geninfo_all_blocks=1 00:04:02.263 --rc geninfo_unexecuted_blocks=1 00:04:02.263 00:04:02.263 ' 00:04:02.263 18:44:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.263 18:44:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.263 18:44:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.263 18:44:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.263 ************************************ 00:04:02.263 START TEST env_memory 00:04:02.263 ************************************ 00:04:02.263 18:44:45 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:02.263 00:04:02.263 00:04:02.263 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.263 http://cunit.sourceforge.net/ 00:04:02.263 00:04:02.263 00:04:02.263 Suite: memory 00:04:02.263 Test: alloc and free memory map ...[2024-11-16 18:44:45.603797] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:02.263 passed 00:04:02.263 Test: mem map translation ...[2024-11-16 18:44:45.649862] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:02.263 [2024-11-16 18:44:45.649964] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:02.263 [2024-11-16 18:44:45.650038] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:02.263 [2024-11-16 18:44:45.650063] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:02.263 passed 00:04:02.263 Test: mem map registration ...[2024-11-16 18:44:45.719758] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:02.263 [2024-11-16 18:44:45.719831] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:02.523 passed 00:04:02.523 Test: mem map adjacent registrations ...passed 00:04:02.523 00:04:02.523 Run Summary: Type Total Ran Passed Failed Inactive 00:04:02.523 suites 1 1 n/a 0 0 00:04:02.523 tests 4 4 4 0 0 00:04:02.523 asserts 152 152 152 0 n/a 00:04:02.523 00:04:02.523 Elapsed time = 0.249 seconds 00:04:02.523 ************************************ 00:04:02.523 END TEST env_memory 00:04:02.523 ************************************ 00:04:02.523 00:04:02.523 real 0m0.304s 00:04:02.523 user 0m0.265s 00:04:02.523 sys 0m0.027s 00:04:02.523 18:44:45 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:02.523 18:44:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:02.523 18:44:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:02.523 
18:44:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.523 18:44:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.523 18:44:45 env -- common/autotest_common.sh@10 -- # set +x 00:04:02.523 ************************************ 00:04:02.523 START TEST env_vtophys 00:04:02.523 ************************************ 00:04:02.523 18:44:45 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:02.523 EAL: lib.eal log level changed from notice to debug 00:04:02.523 EAL: Detected lcore 0 as core 0 on socket 0 00:04:02.523 EAL: Detected lcore 1 as core 0 on socket 0 00:04:02.523 EAL: Detected lcore 2 as core 0 on socket 0 00:04:02.523 EAL: Detected lcore 3 as core 0 on socket 0 00:04:02.523 EAL: Detected lcore 4 as core 0 on socket 0 00:04:02.523 EAL: Detected lcore 5 as core 0 on socket 0 00:04:02.523 EAL: Detected lcore 6 as core 0 on socket 0 00:04:02.523 EAL: Detected lcore 7 as core 0 on socket 0 00:04:02.523 EAL: Detected lcore 8 as core 0 on socket 0 00:04:02.523 EAL: Detected lcore 9 as core 0 on socket 0 00:04:02.523 EAL: Maximum logical cores by configuration: 128 00:04:02.524 EAL: Detected CPU lcores: 10 00:04:02.524 EAL: Detected NUMA nodes: 1 00:04:02.524 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:02.524 EAL: Detected shared linkage of DPDK 00:04:02.524 EAL: No shared files mode enabled, IPC will be disabled 00:04:02.524 EAL: Selected IOVA mode 'PA' 00:04:02.524 EAL: Probing VFIO support... 00:04:02.524 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:02.524 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:02.524 EAL: Ask a virtual area of 0x2e000 bytes 00:04:02.524 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:02.524 EAL: Setting up physically contiguous memory... 
00:04:02.524 EAL: Setting maximum number of open files to 524288 00:04:02.524 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:02.524 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:02.524 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.524 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:02.524 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.524 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.524 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:02.524 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:02.524 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.524 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:02.524 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.524 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.524 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:02.524 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:02.524 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.524 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:02.524 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.524 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.524 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:02.524 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:02.524 EAL: Ask a virtual area of 0x61000 bytes 00:04:02.524 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:02.524 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:02.524 EAL: Ask a virtual area of 0x400000000 bytes 00:04:02.524 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:02.524 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:02.524 EAL: Hugepages will be freed exactly as allocated. 
00:04:02.524 EAL: No shared files mode enabled, IPC is disabled 00:04:02.524 EAL: No shared files mode enabled, IPC is disabled 00:04:02.783 EAL: TSC frequency is ~2290000 KHz 00:04:02.783 EAL: Main lcore 0 is ready (tid=7fd99d2a4a40;cpuset=[0]) 00:04:02.783 EAL: Trying to obtain current memory policy. 00:04:02.783 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:02.783 EAL: Restoring previous memory policy: 0 00:04:02.783 EAL: request: mp_malloc_sync 00:04:02.783 EAL: No shared files mode enabled, IPC is disabled 00:04:02.783 EAL: Heap on socket 0 was expanded by 2MB 00:04:02.783 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:02.783 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:02.783 EAL: Mem event callback 'spdk:(nil)' registered 00:04:02.783 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:02.783 00:04:02.783 00:04:02.783 CUnit - A unit testing framework for C - Version 2.1-3 00:04:02.783 http://cunit.sourceforge.net/ 00:04:02.783 00:04:02.783 00:04:02.783 Suite: components_suite 00:04:03.042 Test: vtophys_malloc_test ...passed 00:04:03.042 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:03.042 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.042 EAL: Restoring previous memory policy: 4 00:04:03.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.043 EAL: request: mp_malloc_sync 00:04:03.043 EAL: No shared files mode enabled, IPC is disabled 00:04:03.043 EAL: Heap on socket 0 was expanded by 4MB 00:04:03.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.043 EAL: request: mp_malloc_sync 00:04:03.043 EAL: No shared files mode enabled, IPC is disabled 00:04:03.043 EAL: Heap on socket 0 was shrunk by 4MB 00:04:03.043 EAL: Trying to obtain current memory policy. 
00:04:03.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.043 EAL: Restoring previous memory policy: 4 00:04:03.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.043 EAL: request: mp_malloc_sync 00:04:03.043 EAL: No shared files mode enabled, IPC is disabled 00:04:03.043 EAL: Heap on socket 0 was expanded by 6MB 00:04:03.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.043 EAL: request: mp_malloc_sync 00:04:03.043 EAL: No shared files mode enabled, IPC is disabled 00:04:03.043 EAL: Heap on socket 0 was shrunk by 6MB 00:04:03.043 EAL: Trying to obtain current memory policy. 00:04:03.043 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.043 EAL: Restoring previous memory policy: 4 00:04:03.043 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.043 EAL: request: mp_malloc_sync 00:04:03.043 EAL: No shared files mode enabled, IPC is disabled 00:04:03.043 EAL: Heap on socket 0 was expanded by 10MB 00:04:03.302 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.302 EAL: request: mp_malloc_sync 00:04:03.302 EAL: No shared files mode enabled, IPC is disabled 00:04:03.302 EAL: Heap on socket 0 was shrunk by 10MB 00:04:03.302 EAL: Trying to obtain current memory policy. 00:04:03.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.302 EAL: Restoring previous memory policy: 4 00:04:03.302 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.302 EAL: request: mp_malloc_sync 00:04:03.302 EAL: No shared files mode enabled, IPC is disabled 00:04:03.302 EAL: Heap on socket 0 was expanded by 18MB 00:04:03.302 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.302 EAL: request: mp_malloc_sync 00:04:03.302 EAL: No shared files mode enabled, IPC is disabled 00:04:03.302 EAL: Heap on socket 0 was shrunk by 18MB 00:04:03.302 EAL: Trying to obtain current memory policy. 
00:04:03.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.302 EAL: Restoring previous memory policy: 4 00:04:03.302 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.302 EAL: request: mp_malloc_sync 00:04:03.302 EAL: No shared files mode enabled, IPC is disabled 00:04:03.302 EAL: Heap on socket 0 was expanded by 34MB 00:04:03.302 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.302 EAL: request: mp_malloc_sync 00:04:03.302 EAL: No shared files mode enabled, IPC is disabled 00:04:03.302 EAL: Heap on socket 0 was shrunk by 34MB 00:04:03.302 EAL: Trying to obtain current memory policy. 00:04:03.302 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.302 EAL: Restoring previous memory policy: 4 00:04:03.302 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.302 EAL: request: mp_malloc_sync 00:04:03.302 EAL: No shared files mode enabled, IPC is disabled 00:04:03.302 EAL: Heap on socket 0 was expanded by 66MB 00:04:03.561 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.561 EAL: request: mp_malloc_sync 00:04:03.561 EAL: No shared files mode enabled, IPC is disabled 00:04:03.561 EAL: Heap on socket 0 was shrunk by 66MB 00:04:03.561 EAL: Trying to obtain current memory policy. 00:04:03.561 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:03.562 EAL: Restoring previous memory policy: 4 00:04:03.562 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.562 EAL: request: mp_malloc_sync 00:04:03.562 EAL: No shared files mode enabled, IPC is disabled 00:04:03.562 EAL: Heap on socket 0 was expanded by 130MB 00:04:03.821 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.821 EAL: request: mp_malloc_sync 00:04:03.821 EAL: No shared files mode enabled, IPC is disabled 00:04:03.821 EAL: Heap on socket 0 was shrunk by 130MB 00:04:04.080 EAL: Trying to obtain current memory policy. 
00:04:04.080 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:04.080 EAL: Restoring previous memory policy: 4 00:04:04.080 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.080 EAL: request: mp_malloc_sync 00:04:04.080 EAL: No shared files mode enabled, IPC is disabled 00:04:04.080 EAL: Heap on socket 0 was expanded by 258MB 00:04:04.649 EAL: Calling mem event callback 'spdk:(nil)' 00:04:04.649 EAL: request: mp_malloc_sync 00:04:04.649 EAL: No shared files mode enabled, IPC is disabled 00:04:04.649 EAL: Heap on socket 0 was shrunk by 258MB 00:04:05.219 EAL: Trying to obtain current memory policy. 00:04:05.219 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:05.219 EAL: Restoring previous memory policy: 4 00:04:05.219 EAL: Calling mem event callback 'spdk:(nil)' 00:04:05.219 EAL: request: mp_malloc_sync 00:04:05.219 EAL: No shared files mode enabled, IPC is disabled 00:04:05.219 EAL: Heap on socket 0 was expanded by 514MB 00:04:06.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:06.157 EAL: request: mp_malloc_sync 00:04:06.157 EAL: No shared files mode enabled, IPC is disabled 00:04:06.157 EAL: Heap on socket 0 was shrunk by 514MB 00:04:07.098 EAL: Trying to obtain current memory policy. 
00:04:07.098 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:07.098 EAL: Restoring previous memory policy: 4
00:04:07.098 EAL: Calling mem event callback 'spdk:(nil)'
00:04:07.098 EAL: request: mp_malloc_sync
00:04:07.098 EAL: No shared files mode enabled, IPC is disabled
00:04:07.098 EAL: Heap on socket 0 was expanded by 1026MB
00:04:09.019 EAL: Calling mem event callback 'spdk:(nil)'
00:04:09.019 EAL: request: mp_malloc_sync
00:04:09.019 EAL: No shared files mode enabled, IPC is disabled
00:04:09.019 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:10.926 passed
00:04:10.926
00:04:10.926 Run Summary: Type Total Ran Passed Failed Inactive
00:04:10.926 suites 1 1 n/a 0 0
00:04:10.926 tests 2 2 2 0 0
00:04:10.926 asserts 5789 5789 5789 0 n/a
00:04:10.926
00:04:10.926 Elapsed time = 7.819 seconds
00:04:10.926 EAL: Calling mem event callback 'spdk:(nil)'
00:04:10.926 EAL: request: mp_malloc_sync
00:04:10.926 EAL: No shared files mode enabled, IPC is disabled
00:04:10.926 EAL: Heap on socket 0 was shrunk by 2MB
00:04:10.926 EAL: No shared files mode enabled, IPC is disabled
00:04:10.926 EAL: No shared files mode enabled, IPC is disabled
00:04:10.926 EAL: No shared files mode enabled, IPC is disabled
00:04:10.926
00:04:10.927 real 0m8.116s
00:04:10.927 user 0m7.162s
00:04:10.927 sys 0m0.802s
00:04:10.927 18:44:54 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:10.927 18:44:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:10.927 ************************************
00:04:10.927 END TEST env_vtophys
00:04:10.927 ************************************
00:04:10.927 18:44:54 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:10.927 18:44:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:10.927 18:44:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:10.927 18:44:54 env -- common/autotest_common.sh@10 -- # set +x
00:04:10.927 ************************************
00:04:10.927 START TEST env_pci ************************************
00:04:10.927 18:44:54 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:10.927
00:04:10.927
00:04:10.927 CUnit - A unit testing framework for C - Version 2.1-3
00:04:10.927 http://cunit.sourceforge.net/
00:04:10.927
00:04:10.927
00:04:10.927 Suite: pci
00:04:10.927 Test: pci_hook ...[2024-11-16 18:44:54.112300] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56761 has claimed it
00:04:10.927 passed
00:04:10.927
00:04:10.927 Run Summary: Type Total Ran Passed Failed Inactive
00:04:10.927 suites 1 1 n/a 0 0
00:04:10.927 tests 1 1 1 0 0
00:04:10.927 asserts 25 25 25 0 n/a
00:04:10.927
00:04:10.927 Elapsed time = 0.007 seconds
00:04:10.927 EAL: Cannot find device (10000:00:01.0)
00:04:10.927 EAL: Failed to attach device on primary process
00:04:10.927
00:04:10.927 real 0m0.106s
00:04:10.927 user 0m0.036s
00:04:10.927 sys 0m0.070s
00:04:10.927 18:44:54 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:10.927 18:44:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:10.927 ************************************
00:04:10.927 END TEST env_pci
00:04:10.927 ************************************
00:04:10.927 18:44:54 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:10.927 18:44:54 env -- env/env.sh@15 -- # uname
00:04:10.927 18:44:54 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:10.927 18:44:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:10.927 18:44:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:10.927 18:44:54 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:10.927 18:44:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:10.927 18:44:54 env -- common/autotest_common.sh@10 -- # set +x
00:04:10.927 ************************************
00:04:10.927 START TEST env_dpdk_post_init
00:04:10.927 ************************************
00:04:10.927 18:44:54 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:10.927 EAL: Detected CPU lcores: 10
00:04:10.927 EAL: Detected NUMA nodes: 1
00:04:10.927 EAL: Detected shared linkage of DPDK
00:04:10.927 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:10.927 EAL: Selected IOVA mode 'PA'
00:04:11.187 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:11.187 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:11.187 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:04:11.187 Starting DPDK initialization...
00:04:11.187 Starting SPDK post initialization...
00:04:11.187 SPDK NVMe probe
00:04:11.187 Attaching to 0000:00:10.0
00:04:11.187 Attaching to 0000:00:11.0
00:04:11.187 Attached to 0000:00:10.0
00:04:11.187 Attached to 0000:00:11.0
00:04:11.187 Cleaning up...
00:04:11.187
00:04:11.187 real 0m0.279s
00:04:11.187 user 0m0.095s
00:04:11.187 sys 0m0.083s
00:04:11.187 18:44:54 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.187 18:44:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:11.187 ************************************
00:04:11.187 END TEST env_dpdk_post_init
00:04:11.187 ************************************
00:04:11.187 18:44:54 env -- env/env.sh@26 -- # uname
00:04:11.187 18:44:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:11.187 18:44:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:11.187 18:44:54 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:11.187 18:44:54 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:11.187 18:44:54 env -- common/autotest_common.sh@10 -- # set +x
00:04:11.187 ************************************
00:04:11.187 START TEST env_mem_callbacks
00:04:11.187 ************************************
00:04:11.187 18:44:54 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:11.187 EAL: Detected CPU lcores: 10
00:04:11.187 EAL: Detected NUMA nodes: 1
00:04:11.187 EAL: Detected shared linkage of DPDK
00:04:11.446 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:11.446 EAL: Selected IOVA mode 'PA'
00:04:11.446
00:04:11.446
00:04:11.446 CUnit - A unit testing framework for C - Version 2.1-3
00:04:11.446 http://cunit.sourceforge.net/
00:04:11.446
00:04:11.446
00:04:11.446 Suite: memory
00:04:11.446 Test: test ...
00:04:11.446 register 0x200000200000 2097152
00:04:11.446 malloc 3145728
00:04:11.446 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:11.446 register 0x200000400000 4194304
00:04:11.446 buf 0x2000004fffc0 len 3145728 PASSED
00:04:11.446 malloc 64
00:04:11.446 buf 0x2000004ffec0 len 64 PASSED
00:04:11.446 malloc 4194304
00:04:11.446 register 0x200000800000 6291456
00:04:11.447 buf 0x2000009fffc0 len 4194304 PASSED
00:04:11.447 free 0x2000004fffc0 3145728
00:04:11.447 free 0x2000004ffec0 64
00:04:11.447 unregister 0x200000400000 4194304 PASSED
00:04:11.447 free 0x2000009fffc0 4194304
00:04:11.447 unregister 0x200000800000 6291456 PASSED
00:04:11.447 malloc 8388608
00:04:11.447 register 0x200000400000 10485760
00:04:11.447 buf 0x2000005fffc0 len 8388608 PASSED
00:04:11.447 free 0x2000005fffc0 8388608
00:04:11.447 unregister 0x200000400000 10485760 PASSED
00:04:11.447 passed
00:04:11.447
00:04:11.447 Run Summary: Type Total Ran Passed Failed Inactive
00:04:11.447 suites 1 1 n/a 0 0
00:04:11.447 tests 1 1 1 0 0
00:04:11.447 asserts 15 15 15 0 n/a
00:04:11.447
00:04:11.447 Elapsed time = 0.087 seconds
00:04:11.447
00:04:11.447 real 0m0.288s
00:04:11.447 user 0m0.115s
00:04:11.447 sys 0m0.070s
00:04:11.447 18:44:54 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.447 18:44:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:11.447 ************************************
00:04:11.447 END TEST env_mem_callbacks
00:04:11.447 ************************************
00:04:11.706
00:04:11.706 real 0m9.649s
00:04:11.706 user 0m7.901s
00:04:11.706 sys 0m1.393s
00:04:11.706 18:44:54 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:11.706 18:44:54 env -- common/autotest_common.sh@10 -- # set +x
00:04:11.706 ************************************
00:04:11.706 END TEST env
00:04:11.706 ************************************
00:04:11.706 18:44:54 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:11.706 18:44:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:11.706 18:44:54 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:11.706 18:44:54 -- common/autotest_common.sh@10 -- # set +x
00:04:11.706 ************************************
00:04:11.706 START TEST rpc
00:04:11.706 ************************************
00:04:11.706 18:44:54 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:11.706 * Looking for test storage...
00:04:11.706 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:11.706 18:44:55 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:11.706 18:44:55 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:11.706 18:44:55 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:04:11.967 18:44:55 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:11.967 18:44:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:11.967 18:44:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:11.967 18:44:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:11.967 18:44:55 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:11.967 18:44:55 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:11.967 18:44:55 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:11.967 18:44:55 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:11.967 18:44:55 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:11.967 18:44:55 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:11.967 18:44:55 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:11.967 18:44:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:11.967 18:44:55 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:11.967 18:44:55 rpc -- scripts/common.sh@345 -- # : 1
00:04:11.967 18:44:55 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:11.967 18:44:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:11.967 18:44:55 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:11.967 18:44:55 rpc -- scripts/common.sh@353 -- # local d=1
00:04:11.967 18:44:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:11.967 18:44:55 rpc -- scripts/common.sh@355 -- # echo 1
00:04:11.967 18:44:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:11.967 18:44:55 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:11.967 18:44:55 rpc -- scripts/common.sh@353 -- # local d=2
00:04:11.967 18:44:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:11.967 18:44:55 rpc -- scripts/common.sh@355 -- # echo 2
00:04:11.967 18:44:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:11.967 18:44:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:11.967 18:44:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:11.967 18:44:55 rpc -- scripts/common.sh@368 -- # return 0
00:04:11.967 18:44:55 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:11.967 18:44:55 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:04:11.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.967 --rc genhtml_branch_coverage=1
00:04:11.967 --rc genhtml_function_coverage=1
00:04:11.967 --rc genhtml_legend=1
00:04:11.967 --rc geninfo_all_blocks=1
00:04:11.967 --rc geninfo_unexecuted_blocks=1
00:04:11.967
00:04:11.967 '
00:04:11.967 18:44:55 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:04:11.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.967 --rc genhtml_branch_coverage=1
00:04:11.967 --rc genhtml_function_coverage=1
00:04:11.967 --rc genhtml_legend=1
00:04:11.967 --rc geninfo_all_blocks=1
00:04:11.967 --rc geninfo_unexecuted_blocks=1
00:04:11.967
00:04:11.967 '
00:04:11.967 18:44:55 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:04:11.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.967 --rc genhtml_branch_coverage=1
00:04:11.967 --rc genhtml_function_coverage=1
00:04:11.967 --rc genhtml_legend=1
00:04:11.967 --rc geninfo_all_blocks=1
00:04:11.967 --rc geninfo_unexecuted_blocks=1
00:04:11.967
00:04:11.967 '
00:04:11.967 18:44:55 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:04:11.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:11.967 --rc genhtml_branch_coverage=1
00:04:11.967 --rc genhtml_function_coverage=1
00:04:11.967 --rc genhtml_legend=1
00:04:11.967 --rc geninfo_all_blocks=1
00:04:11.967 --rc geninfo_unexecuted_blocks=1
00:04:11.967
00:04:11.968 '
00:04:11.968 18:44:55 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:11.968 18:44:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56894
00:04:11.968 18:44:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:11.968 18:44:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56894
00:04:11.968 18:44:55 rpc -- common/autotest_common.sh@835 -- # '[' -z 56894 ']'
00:04:11.968 18:44:55 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:11.968 18:44:55 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:11.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:11.968 18:44:55 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:11.968 18:44:55 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:11.968 18:44:55 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:11.968 [2024-11-16 18:44:55.337983] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:04:11.968 [2024-11-16 18:44:55.338115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56894 ]
00:04:12.227 [2024-11-16 18:44:55.508747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:12.227 [2024-11-16 18:44:55.620625] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:12.227 [2024-11-16 18:44:55.620720] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56894' to capture a snapshot of events at runtime.
00:04:12.227 [2024-11-16 18:44:55.620730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:12.227 [2024-11-16 18:44:55.620741] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:12.227 [2024-11-16 18:44:55.620749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56894 for offline analysis/debug.
00:04:12.227 [2024-11-16 18:44:55.622090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:13.165 18:44:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:13.165 18:44:56 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:13.165 18:44:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:13.165 18:44:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:13.165 18:44:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:13.165 18:44:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:13.165 18:44:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:13.165 18:44:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:13.165 18:44:56 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:13.165 ************************************
00:04:13.165 START TEST rpc_integrity
00:04:13.165 ************************************
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:13.165 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.165 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:13.165 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:13.165 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:13.165 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.165 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:13.165 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.165 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.165 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:13.165 {
00:04:13.165 "name": "Malloc0",
00:04:13.165 "aliases": [
00:04:13.165 "13485348-8314-4c8a-af77-e495bee60c09"
00:04:13.165 ],
00:04:13.165 "product_name": "Malloc disk",
00:04:13.165 "block_size": 512,
00:04:13.165 "num_blocks": 16384,
00:04:13.165 "uuid": "13485348-8314-4c8a-af77-e495bee60c09",
00:04:13.165 "assigned_rate_limits": {
00:04:13.165 "rw_ios_per_sec": 0,
00:04:13.165 "rw_mbytes_per_sec": 0,
00:04:13.165 "r_mbytes_per_sec": 0,
00:04:13.165 "w_mbytes_per_sec": 0
00:04:13.165 },
00:04:13.165 "claimed": false,
00:04:13.165 "zoned": false,
00:04:13.165 "supported_io_types": {
00:04:13.165 "read": true,
00:04:13.165 "write": true,
00:04:13.165 "unmap": true,
00:04:13.165 "flush": true,
00:04:13.165 "reset": true,
00:04:13.165 "nvme_admin": false,
00:04:13.165 "nvme_io": false,
00:04:13.165 "nvme_io_md": false,
00:04:13.165 "write_zeroes": true,
00:04:13.165 "zcopy": true,
00:04:13.165 "get_zone_info": false,
00:04:13.165 "zone_management": false,
00:04:13.165 "zone_append": false,
00:04:13.165 "compare": false,
00:04:13.165 "compare_and_write": false,
00:04:13.165 "abort": true,
00:04:13.165 "seek_hole": false,
00:04:13.165 "seek_data": false,
00:04:13.165 "copy": true,
00:04:13.165 "nvme_iov_md": false
00:04:13.165 },
00:04:13.165 "memory_domains": [
00:04:13.165 {
00:04:13.165 "dma_device_id": "system",
00:04:13.165 "dma_device_type": 1
00:04:13.165 },
00:04:13.165 {
00:04:13.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:13.165 "dma_device_type": 2
00:04:13.165 }
00:04:13.165 ],
00:04:13.165 "driver_specific": {}
00:04:13.165 }
00:04:13.165 ]'
00:04:13.165 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.425 [2024-11-16 18:44:56.641214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:13.425 [2024-11-16 18:44:56.641306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:13.425 [2024-11-16 18:44:56.641336] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:04:13.425 [2024-11-16 18:44:56.641350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:13.425 [2024-11-16 18:44:56.643787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:13.425 [2024-11-16 18:44:56.643832] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:13.425 Passthru0
00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:13.425 { 00:04:13.425 "name": "Malloc0", 00:04:13.425 "aliases": [ 00:04:13.425 "13485348-8314-4c8a-af77-e495bee60c09" 00:04:13.425 ], 00:04:13.425 "product_name": "Malloc disk", 00:04:13.425 "block_size": 512, 00:04:13.425 "num_blocks": 16384, 00:04:13.425 "uuid": "13485348-8314-4c8a-af77-e495bee60c09", 00:04:13.425 "assigned_rate_limits": { 00:04:13.425 "rw_ios_per_sec": 0, 00:04:13.425 "rw_mbytes_per_sec": 0, 00:04:13.425 "r_mbytes_per_sec": 0, 00:04:13.425 "w_mbytes_per_sec": 0 00:04:13.425 }, 00:04:13.425 "claimed": true, 00:04:13.425 "claim_type": "exclusive_write", 00:04:13.425 "zoned": false, 00:04:13.425 "supported_io_types": { 00:04:13.425 "read": true, 00:04:13.425 "write": true, 00:04:13.425 "unmap": true, 00:04:13.425 "flush": true, 00:04:13.425 "reset": true, 00:04:13.425 "nvme_admin": false, 00:04:13.425 "nvme_io": false, 00:04:13.425 "nvme_io_md": false, 00:04:13.425 "write_zeroes": true, 00:04:13.425 "zcopy": true, 00:04:13.425 "get_zone_info": false, 00:04:13.425 "zone_management": false, 00:04:13.425 "zone_append": false, 00:04:13.425 "compare": false, 00:04:13.425 "compare_and_write": false, 00:04:13.425 "abort": true, 00:04:13.425 "seek_hole": false, 00:04:13.425 "seek_data": false, 00:04:13.425 "copy": true, 00:04:13.425 "nvme_iov_md": false 00:04:13.425 }, 00:04:13.425 "memory_domains": [ 00:04:13.425 { 00:04:13.425 "dma_device_id": "system", 00:04:13.425 "dma_device_type": 1 00:04:13.425 }, 00:04:13.425 { 00:04:13.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.425 "dma_device_type": 2 00:04:13.425 } 00:04:13.425 ], 00:04:13.425 "driver_specific": {} 00:04:13.425 }, 00:04:13.425 { 00:04:13.425 "name": "Passthru0", 00:04:13.425 "aliases": [ 00:04:13.425 "9200da72-8650-59b0-b4fa-84377793888e" 00:04:13.425 ], 00:04:13.425 "product_name": "passthru", 00:04:13.425 
"block_size": 512, 00:04:13.425 "num_blocks": 16384, 00:04:13.425 "uuid": "9200da72-8650-59b0-b4fa-84377793888e", 00:04:13.425 "assigned_rate_limits": { 00:04:13.425 "rw_ios_per_sec": 0, 00:04:13.425 "rw_mbytes_per_sec": 0, 00:04:13.425 "r_mbytes_per_sec": 0, 00:04:13.425 "w_mbytes_per_sec": 0 00:04:13.425 }, 00:04:13.425 "claimed": false, 00:04:13.425 "zoned": false, 00:04:13.425 "supported_io_types": { 00:04:13.425 "read": true, 00:04:13.425 "write": true, 00:04:13.425 "unmap": true, 00:04:13.425 "flush": true, 00:04:13.425 "reset": true, 00:04:13.425 "nvme_admin": false, 00:04:13.425 "nvme_io": false, 00:04:13.425 "nvme_io_md": false, 00:04:13.425 "write_zeroes": true, 00:04:13.425 "zcopy": true, 00:04:13.425 "get_zone_info": false, 00:04:13.425 "zone_management": false, 00:04:13.425 "zone_append": false, 00:04:13.425 "compare": false, 00:04:13.425 "compare_and_write": false, 00:04:13.425 "abort": true, 00:04:13.425 "seek_hole": false, 00:04:13.425 "seek_data": false, 00:04:13.425 "copy": true, 00:04:13.425 "nvme_iov_md": false 00:04:13.425 }, 00:04:13.425 "memory_domains": [ 00:04:13.425 { 00:04:13.425 "dma_device_id": "system", 00:04:13.425 "dma_device_type": 1 00:04:13.425 }, 00:04:13.425 { 00:04:13.425 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.425 "dma_device_type": 2 00:04:13.425 } 00:04:13.425 ], 00:04:13.425 "driver_specific": { 00:04:13.425 "passthru": { 00:04:13.425 "name": "Passthru0", 00:04:13.425 "base_bdev_name": "Malloc0" 00:04:13.425 } 00:04:13.425 } 00:04:13.425 } 00:04:13.425 ]' 00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.425 18:44:56 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:13.425 18:44:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:13.425 00:04:13.425 real 0m0.337s 00:04:13.425 user 0m0.187s 00:04:13.425 sys 0m0.060s 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.425 18:44:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.425 ************************************ 00:04:13.425 END TEST rpc_integrity 00:04:13.425 ************************************ 00:04:13.425 18:44:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:13.425 18:44:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.425 18:44:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.425 18:44:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.425 ************************************ 00:04:13.425 START TEST rpc_plugins 00:04:13.425 ************************************ 00:04:13.425 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:13.425 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:13.425 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.425 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.686 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.686 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:13.686 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:13.686 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.686 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.686 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.686 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:13.686 { 00:04:13.686 "name": "Malloc1", 00:04:13.686 "aliases": [ 00:04:13.686 "ca9ace93-9c17-486d-ac66-5b42b268712d" 00:04:13.686 ], 00:04:13.686 "product_name": "Malloc disk", 00:04:13.686 "block_size": 4096, 00:04:13.686 "num_blocks": 256, 00:04:13.686 "uuid": "ca9ace93-9c17-486d-ac66-5b42b268712d", 00:04:13.686 "assigned_rate_limits": { 00:04:13.686 "rw_ios_per_sec": 0, 00:04:13.686 "rw_mbytes_per_sec": 0, 00:04:13.686 "r_mbytes_per_sec": 0, 00:04:13.686 "w_mbytes_per_sec": 0 00:04:13.686 }, 00:04:13.686 "claimed": false, 00:04:13.686 "zoned": false, 00:04:13.686 "supported_io_types": { 00:04:13.686 "read": true, 00:04:13.686 "write": true, 00:04:13.686 "unmap": true, 00:04:13.686 "flush": true, 00:04:13.686 "reset": true, 00:04:13.686 "nvme_admin": false, 00:04:13.686 "nvme_io": false, 00:04:13.686 "nvme_io_md": false, 00:04:13.686 "write_zeroes": true, 00:04:13.686 "zcopy": true, 00:04:13.686 "get_zone_info": false, 00:04:13.686 "zone_management": false, 00:04:13.686 "zone_append": false, 00:04:13.686 "compare": false, 00:04:13.686 "compare_and_write": false, 00:04:13.686 "abort": true, 00:04:13.686 "seek_hole": false, 00:04:13.686 "seek_data": false, 00:04:13.686 "copy": 
true, 00:04:13.686 "nvme_iov_md": false 00:04:13.686 }, 00:04:13.686 "memory_domains": [ 00:04:13.686 { 00:04:13.686 "dma_device_id": "system", 00:04:13.686 "dma_device_type": 1 00:04:13.686 }, 00:04:13.686 { 00:04:13.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:13.686 "dma_device_type": 2 00:04:13.686 } 00:04:13.686 ], 00:04:13.686 "driver_specific": {} 00:04:13.686 } 00:04:13.686 ]' 00:04:13.686 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:13.686 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:13.686 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:13.686 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.686 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.686 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.686 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:13.686 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.686 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.687 18:44:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.687 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:13.687 18:44:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:13.687 18:44:57 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:13.687 00:04:13.687 real 0m0.157s 00:04:13.687 user 0m0.096s 00:04:13.687 sys 0m0.027s 00:04:13.687 18:44:57 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.687 18:44:57 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:13.687 ************************************ 00:04:13.687 END TEST rpc_plugins 00:04:13.687 ************************************ 00:04:13.687 18:44:57 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:13.687 18:44:57 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.687 18:44:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.687 18:44:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.687 ************************************ 00:04:13.687 START TEST rpc_trace_cmd_test 00:04:13.687 ************************************ 00:04:13.687 18:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:13.687 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:13.687 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:13.687 18:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.687 18:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.687 18:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.687 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:13.687 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56894", 00:04:13.687 "tpoint_group_mask": "0x8", 00:04:13.687 "iscsi_conn": { 00:04:13.687 "mask": "0x2", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "scsi": { 00:04:13.687 "mask": "0x4", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "bdev": { 00:04:13.687 "mask": "0x8", 00:04:13.687 "tpoint_mask": "0xffffffffffffffff" 00:04:13.687 }, 00:04:13.687 "nvmf_rdma": { 00:04:13.687 "mask": "0x10", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "nvmf_tcp": { 00:04:13.687 "mask": "0x20", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "ftl": { 00:04:13.687 "mask": "0x40", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "blobfs": { 00:04:13.687 "mask": "0x80", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "dsa": { 00:04:13.687 "mask": "0x200", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "thread": { 00:04:13.687 "mask": "0x400", 00:04:13.687 
"tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "nvme_pcie": { 00:04:13.687 "mask": "0x800", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "iaa": { 00:04:13.687 "mask": "0x1000", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "nvme_tcp": { 00:04:13.687 "mask": "0x2000", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "bdev_nvme": { 00:04:13.687 "mask": "0x4000", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "sock": { 00:04:13.687 "mask": "0x8000", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "blob": { 00:04:13.687 "mask": "0x10000", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "bdev_raid": { 00:04:13.687 "mask": "0x20000", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 }, 00:04:13.687 "scheduler": { 00:04:13.687 "mask": "0x40000", 00:04:13.687 "tpoint_mask": "0x0" 00:04:13.687 } 00:04:13.687 }' 00:04:13.687 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:13.947 00:04:13.947 real 0m0.230s 00:04:13.947 user 0m0.192s 00:04:13.947 sys 0m0.031s 00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
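The `jq` checks above validate the `trace_get_info` response: the object must have more than the two bookkeeping keys, `tpoint_group_mask` and `tpoint_shm_path` must be present, and the `bdev` group's `tpoint_mask` must be non-zero because the group mask `0x8` enabled it. A condensed, self-contained version of those checks — the JSON literal is a trimmed stand-in for the full response logged above:

```shell
# Trimmed stand-in for the trace_get_info output logged above; the
# real test reads it from `rpc_cmd trace_get_info`.
info='{
  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56894",
  "tpoint_group_mask": "0x8",
  "bdev": { "mask": "0x8", "tpoint_mask": "0xffffffffffffffff" }
}'

[ "$(jq length <<< "$info")" -gt 2 ]                   # more than the two bookkeeping keys
[ "$(jq 'has("tpoint_group_mask")' <<< "$info")" = true ]
[ "$(jq 'has("tpoint_shm_path")' <<< "$info")" = true ]
[ "$(jq -r '.bdev.tpoint_mask' <<< "$info")" != 0x0 ]  # group mask 0x8 enabled bdev tracepoints
echo "trace checks passed"
```

Note `jq length` on an object counts its keys, which is why the real test's `'[' 19 -gt 2 ']'` passes with the full 17-group response.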
00:04:13.947 18:44:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:13.947 ************************************ 00:04:13.947 END TEST rpc_trace_cmd_test 00:04:13.947 ************************************ 00:04:13.947 18:44:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:13.947 18:44:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:13.948 18:44:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:13.948 18:44:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.948 18:44:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.948 18:44:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.948 ************************************ 00:04:13.948 START TEST rpc_daemon_integrity 00:04:13.948 ************************************ 00:04:13.948 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:13.948 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:13.948 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:13.948 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:13.948 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:13.948 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:13.948 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:14.207 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:14.207 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:14.207 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.207 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.207 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.207 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:14.207 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:14.208 { 00:04:14.208 "name": "Malloc2", 00:04:14.208 "aliases": [ 00:04:14.208 "297a48ed-f007-4639-887c-073b2f352ed7" 00:04:14.208 ], 00:04:14.208 "product_name": "Malloc disk", 00:04:14.208 "block_size": 512, 00:04:14.208 "num_blocks": 16384, 00:04:14.208 "uuid": "297a48ed-f007-4639-887c-073b2f352ed7", 00:04:14.208 "assigned_rate_limits": { 00:04:14.208 "rw_ios_per_sec": 0, 00:04:14.208 "rw_mbytes_per_sec": 0, 00:04:14.208 "r_mbytes_per_sec": 0, 00:04:14.208 "w_mbytes_per_sec": 0 00:04:14.208 }, 00:04:14.208 "claimed": false, 00:04:14.208 "zoned": false, 00:04:14.208 "supported_io_types": { 00:04:14.208 "read": true, 00:04:14.208 "write": true, 00:04:14.208 "unmap": true, 00:04:14.208 "flush": true, 00:04:14.208 "reset": true, 00:04:14.208 "nvme_admin": false, 00:04:14.208 "nvme_io": false, 00:04:14.208 "nvme_io_md": false, 00:04:14.208 "write_zeroes": true, 00:04:14.208 "zcopy": true, 00:04:14.208 "get_zone_info": false, 00:04:14.208 "zone_management": false, 00:04:14.208 "zone_append": false, 00:04:14.208 "compare": false, 00:04:14.208 "compare_and_write": false, 00:04:14.208 "abort": true, 00:04:14.208 "seek_hole": false, 00:04:14.208 "seek_data": false, 00:04:14.208 "copy": true, 00:04:14.208 "nvme_iov_md": false 00:04:14.208 }, 00:04:14.208 "memory_domains": [ 00:04:14.208 { 00:04:14.208 "dma_device_id": "system", 00:04:14.208 "dma_device_type": 1 00:04:14.208 }, 00:04:14.208 { 00:04:14.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.208 "dma_device_type": 2 00:04:14.208 } 
00:04:14.208 ], 00:04:14.208 "driver_specific": {} 00:04:14.208 } 00:04:14.208 ]' 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.208 [2024-11-16 18:44:57.540994] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:14.208 [2024-11-16 18:44:57.541078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:14.208 [2024-11-16 18:44:57.541118] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:14.208 [2024-11-16 18:44:57.541129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:14.208 [2024-11-16 18:44:57.543409] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:14.208 [2024-11-16 18:44:57.543451] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:14.208 Passthru0 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:14.208 { 00:04:14.208 "name": "Malloc2", 00:04:14.208 "aliases": [ 00:04:14.208 "297a48ed-f007-4639-887c-073b2f352ed7" 
00:04:14.208 ], 00:04:14.208 "product_name": "Malloc disk", 00:04:14.208 "block_size": 512, 00:04:14.208 "num_blocks": 16384, 00:04:14.208 "uuid": "297a48ed-f007-4639-887c-073b2f352ed7", 00:04:14.208 "assigned_rate_limits": { 00:04:14.208 "rw_ios_per_sec": 0, 00:04:14.208 "rw_mbytes_per_sec": 0, 00:04:14.208 "r_mbytes_per_sec": 0, 00:04:14.208 "w_mbytes_per_sec": 0 00:04:14.208 }, 00:04:14.208 "claimed": true, 00:04:14.208 "claim_type": "exclusive_write", 00:04:14.208 "zoned": false, 00:04:14.208 "supported_io_types": { 00:04:14.208 "read": true, 00:04:14.208 "write": true, 00:04:14.208 "unmap": true, 00:04:14.208 "flush": true, 00:04:14.208 "reset": true, 00:04:14.208 "nvme_admin": false, 00:04:14.208 "nvme_io": false, 00:04:14.208 "nvme_io_md": false, 00:04:14.208 "write_zeroes": true, 00:04:14.208 "zcopy": true, 00:04:14.208 "get_zone_info": false, 00:04:14.208 "zone_management": false, 00:04:14.208 "zone_append": false, 00:04:14.208 "compare": false, 00:04:14.208 "compare_and_write": false, 00:04:14.208 "abort": true, 00:04:14.208 "seek_hole": false, 00:04:14.208 "seek_data": false, 00:04:14.208 "copy": true, 00:04:14.208 "nvme_iov_md": false 00:04:14.208 }, 00:04:14.208 "memory_domains": [ 00:04:14.208 { 00:04:14.208 "dma_device_id": "system", 00:04:14.208 "dma_device_type": 1 00:04:14.208 }, 00:04:14.208 { 00:04:14.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.208 "dma_device_type": 2 00:04:14.208 } 00:04:14.208 ], 00:04:14.208 "driver_specific": {} 00:04:14.208 }, 00:04:14.208 { 00:04:14.208 "name": "Passthru0", 00:04:14.208 "aliases": [ 00:04:14.208 "64c53c73-4074-5bd4-85e6-4ee5ba3cc48a" 00:04:14.208 ], 00:04:14.208 "product_name": "passthru", 00:04:14.208 "block_size": 512, 00:04:14.208 "num_blocks": 16384, 00:04:14.208 "uuid": "64c53c73-4074-5bd4-85e6-4ee5ba3cc48a", 00:04:14.208 "assigned_rate_limits": { 00:04:14.208 "rw_ios_per_sec": 0, 00:04:14.208 "rw_mbytes_per_sec": 0, 00:04:14.208 "r_mbytes_per_sec": 0, 00:04:14.208 "w_mbytes_per_sec": 0 
00:04:14.208 }, 00:04:14.208 "claimed": false, 00:04:14.208 "zoned": false, 00:04:14.208 "supported_io_types": { 00:04:14.208 "read": true, 00:04:14.208 "write": true, 00:04:14.208 "unmap": true, 00:04:14.208 "flush": true, 00:04:14.208 "reset": true, 00:04:14.208 "nvme_admin": false, 00:04:14.208 "nvme_io": false, 00:04:14.208 "nvme_io_md": false, 00:04:14.208 "write_zeroes": true, 00:04:14.208 "zcopy": true, 00:04:14.208 "get_zone_info": false, 00:04:14.208 "zone_management": false, 00:04:14.208 "zone_append": false, 00:04:14.208 "compare": false, 00:04:14.208 "compare_and_write": false, 00:04:14.208 "abort": true, 00:04:14.208 "seek_hole": false, 00:04:14.208 "seek_data": false, 00:04:14.208 "copy": true, 00:04:14.208 "nvme_iov_md": false 00:04:14.208 }, 00:04:14.208 "memory_domains": [ 00:04:14.208 { 00:04:14.208 "dma_device_id": "system", 00:04:14.208 "dma_device_type": 1 00:04:14.208 }, 00:04:14.208 { 00:04:14.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:14.208 "dma_device_type": 2 00:04:14.208 } 00:04:14.208 ], 00:04:14.208 "driver_specific": { 00:04:14.208 "passthru": { 00:04:14.208 "name": "Passthru0", 00:04:14.208 "base_bdev_name": "Malloc2" 00:04:14.208 } 00:04:14.208 } 00:04:14.208 } 00:04:14.208 ]' 00:04:14.208 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:14.209 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:14.468 18:44:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:14.468 00:04:14.468 real 0m0.327s 00:04:14.468 user 0m0.191s 00:04:14.468 sys 0m0.053s 00:04:14.468 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.468 18:44:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:14.468 ************************************ 00:04:14.468 END TEST rpc_daemon_integrity 00:04:14.468 ************************************ 00:04:14.468 18:44:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:14.468 18:44:57 rpc -- rpc/rpc.sh@84 -- # killprocess 56894 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 56894 ']' 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@958 -- # kill -0 56894 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@959 -- # uname 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56894 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.468 
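The `killprocess 56894` sequence above follows a fixed pattern: confirm the PID is alive with `kill -0`, sanity-check the process name with `ps`, send the signal, and reap the child. A condensed sketch of that pattern — the real helper in `autotest_common.sh` additionally handles the `ps`/`uname` name check and sudo-owned processes, which are omitted here:

```shell
# Condensed kill-and-wait sketch in the spirit of killprocess above.
# The real autotest_common.sh helper also verifies the process name
# and handles processes it cannot `wait` on directly.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1  # nothing to kill
  kill "$pid"                             # default SIGTERM
  wait "$pid" 2>/dev/null || true         # killed children exit non-zero; expected here
}

sleep 60 &
bgpid=$!
killprocess "$bgpid"
kill -0 "$bgpid" 2>/dev/null || echo "process $bgpid is gone"
```

`wait` only reaps children of the current shell, which is why the real helper falls back to polling when the target was started elsewhere.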
killing process with pid 56894 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56894' 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@973 -- # kill 56894 00:04:14.468 18:44:57 rpc -- common/autotest_common.sh@978 -- # wait 56894 00:04:17.021 00:04:17.021 real 0m5.110s 00:04:17.021 user 0m5.632s 00:04:17.021 sys 0m0.892s 00:04:17.021 18:45:00 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.021 18:45:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.021 ************************************ 00:04:17.021 END TEST rpc 00:04:17.021 ************************************ 00:04:17.021 18:45:00 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:17.021 18:45:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.021 18:45:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.021 18:45:00 -- common/autotest_common.sh@10 -- # set +x 00:04:17.021 ************************************ 00:04:17.021 START TEST skip_rpc 00:04:17.021 ************************************ 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:17.021 * Looking for test storage... 
00:04:17.021 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.021 18:45:00 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.021 --rc genhtml_branch_coverage=1 00:04:17.021 --rc genhtml_function_coverage=1 00:04:17.021 --rc genhtml_legend=1 00:04:17.021 --rc geninfo_all_blocks=1 00:04:17.021 --rc geninfo_unexecuted_blocks=1 00:04:17.021 00:04:17.021 ' 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.021 --rc genhtml_branch_coverage=1 00:04:17.021 --rc genhtml_function_coverage=1 00:04:17.021 --rc genhtml_legend=1 00:04:17.021 --rc geninfo_all_blocks=1 00:04:17.021 --rc geninfo_unexecuted_blocks=1 00:04:17.021 00:04:17.021 ' 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:17.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.021 --rc genhtml_branch_coverage=1 00:04:17.021 --rc genhtml_function_coverage=1 00:04:17.021 --rc genhtml_legend=1 00:04:17.021 --rc geninfo_all_blocks=1 00:04:17.021 --rc geninfo_unexecuted_blocks=1 00:04:17.021 00:04:17.021 ' 00:04:17.021 18:45:00 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.021 --rc genhtml_branch_coverage=1 00:04:17.021 --rc genhtml_function_coverage=1 00:04:17.022 --rc genhtml_legend=1 00:04:17.022 --rc geninfo_all_blocks=1 00:04:17.022 --rc geninfo_unexecuted_blocks=1 00:04:17.022 00:04:17.022 ' 00:04:17.022 18:45:00 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:17.022 18:45:00 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:17.022 18:45:00 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:17.022 18:45:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.022 18:45:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.022 18:45:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.022 ************************************ 00:04:17.022 START TEST skip_rpc 00:04:17.022 ************************************ 00:04:17.022 18:45:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:17.022 18:45:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57123 00:04:17.022 18:45:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:17.022 18:45:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.022 18:45:00 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:17.282 [2024-11-16 18:45:00.496780] Starting SPDK v25.01-pre 
git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:17.282 [2024-11-16 18:45:00.496939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57123 ] 00:04:17.282 [2024-11-16 18:45:00.669033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.542 [2024-11-16 18:45:00.781142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57123 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57123 ']' 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57123 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57123 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.821 killing process with pid 57123 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57123' 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57123 00:04:22.821 18:45:05 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57123 00:04:24.726 00:04:24.726 real 0m7.308s 00:04:24.726 user 0m6.884s 00:04:24.726 sys 0m0.345s 00:04:24.726 18:45:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.726 18:45:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.726 ************************************ 00:04:24.726 END TEST skip_rpc 00:04:24.726 ************************************ 00:04:24.726 18:45:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:24.726 18:45:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.726 18:45:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.726 18:45:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.726 
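The `NOT rpc_cmd spdk_get_version` sequence above (ending with `es=1`, the `(( es > 128 ))` check, and `(( !es == 0 ))`) is the expect-failure path: the target was started with `--no-rpc-server`, so the RPC must fail for the test to pass. A minimal sketch of that inversion helper, following the exit-status handling visible in the log:

```shell
# Expect-failure helper in the spirit of NOT from autotest_common.sh:
# succeed only when the wrapped command fails. Mirrors the es checks
# logged above: a status above 128 (death by signal) is treated as a
# genuine failure, not an expected one.
NOT() {
  local es=0
  "$@" || es=$?
  if (( es > 128 )); then
    return "$es"   # killed by a signal: propagate as a real failure
  fi
  (( es != 0 ))    # non-zero status is the expected outcome
}

NOT false && echo "command failed as expected"
```

The final arithmetic test is the whole trick: it exits 0 exactly when the wrapped command exited non-zero, so `NOT` composes with the usual `run_test` plumbing.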
************************************ 00:04:24.726 START TEST skip_rpc_with_json 00:04:24.726 ************************************ 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57227 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57227 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57227 ']' 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.726 18:45:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:24.726 [2024-11-16 18:45:07.911116] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:24.726 [2024-11-16 18:45:07.911282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57227 ] 00:04:24.726 [2024-11-16 18:45:08.069873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.726 [2024-11-16 18:45:08.177620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.665 [2024-11-16 18:45:09.021370] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:25.665 request: 00:04:25.665 { 00:04:25.665 "trtype": "tcp", 00:04:25.665 "method": "nvmf_get_transports", 00:04:25.665 "req_id": 1 00:04:25.665 } 00:04:25.665 Got JSON-RPC error response 00:04:25.665 response: 00:04:25.665 { 00:04:25.665 "code": -19, 00:04:25.665 "message": "No such device" 00:04:25.665 } 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.665 [2024-11-16 18:45:09.033462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
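The failed `nvmf_get_transports --trtype tcp` call above shows the JSON-RPC error envelope returned before any TCP transport exists: code `-19`, message `"No such device"`. A condensed check of that envelope, using the error object from the response logged above as a literal:

```shell
# Error object taken from the response logged above: code -19
# ("No such device") because no TCP transport had been created yet.
resp='{ "code": -19, "message": "No such device" }'

code=$(jq -r '.code' <<< "$resp")
if [ "$code" -lt 0 ]; then
  echo "rpc failed: $(jq -r '.message' <<< "$resp") (code $code)"
fi
```

Once `nvmf_create_transport -t tcp` succeeds (the `*** TCP Transport Init ***` notice above), the same query returns a normal result object with no `code` member.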
00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.665 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.923 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.923 18:45:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.923 { 00:04:25.923 "subsystems": [ 00:04:25.923 { 00:04:25.924 "subsystem": "fsdev", 00:04:25.924 "config": [ 00:04:25.924 { 00:04:25.924 "method": "fsdev_set_opts", 00:04:25.924 "params": { 00:04:25.924 "fsdev_io_pool_size": 65535, 00:04:25.924 "fsdev_io_cache_size": 256 00:04:25.924 } 00:04:25.924 } 00:04:25.924 ] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "keyring", 00:04:25.924 "config": [] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "iobuf", 00:04:25.924 "config": [ 00:04:25.924 { 00:04:25.924 "method": "iobuf_set_options", 00:04:25.924 "params": { 00:04:25.924 "small_pool_count": 8192, 00:04:25.924 "large_pool_count": 1024, 00:04:25.924 "small_bufsize": 8192, 00:04:25.924 "large_bufsize": 135168, 00:04:25.924 "enable_numa": false 00:04:25.924 } 00:04:25.924 } 00:04:25.924 ] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "sock", 00:04:25.924 "config": [ 00:04:25.924 { 00:04:25.924 "method": "sock_set_default_impl", 00:04:25.924 "params": { 00:04:25.924 "impl_name": "posix" 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "sock_impl_set_options", 00:04:25.924 "params": { 00:04:25.924 "impl_name": "ssl", 00:04:25.924 "recv_buf_size": 4096, 00:04:25.924 "send_buf_size": 4096, 00:04:25.924 "enable_recv_pipe": true, 00:04:25.924 "enable_quickack": false, 00:04:25.924 
"enable_placement_id": 0, 00:04:25.924 "enable_zerocopy_send_server": true, 00:04:25.924 "enable_zerocopy_send_client": false, 00:04:25.924 "zerocopy_threshold": 0, 00:04:25.924 "tls_version": 0, 00:04:25.924 "enable_ktls": false 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "sock_impl_set_options", 00:04:25.924 "params": { 00:04:25.924 "impl_name": "posix", 00:04:25.924 "recv_buf_size": 2097152, 00:04:25.924 "send_buf_size": 2097152, 00:04:25.924 "enable_recv_pipe": true, 00:04:25.924 "enable_quickack": false, 00:04:25.924 "enable_placement_id": 0, 00:04:25.924 "enable_zerocopy_send_server": true, 00:04:25.924 "enable_zerocopy_send_client": false, 00:04:25.924 "zerocopy_threshold": 0, 00:04:25.924 "tls_version": 0, 00:04:25.924 "enable_ktls": false 00:04:25.924 } 00:04:25.924 } 00:04:25.924 ] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "vmd", 00:04:25.924 "config": [] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "accel", 00:04:25.924 "config": [ 00:04:25.924 { 00:04:25.924 "method": "accel_set_options", 00:04:25.924 "params": { 00:04:25.924 "small_cache_size": 128, 00:04:25.924 "large_cache_size": 16, 00:04:25.924 "task_count": 2048, 00:04:25.924 "sequence_count": 2048, 00:04:25.924 "buf_count": 2048 00:04:25.924 } 00:04:25.924 } 00:04:25.924 ] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "bdev", 00:04:25.924 "config": [ 00:04:25.924 { 00:04:25.924 "method": "bdev_set_options", 00:04:25.924 "params": { 00:04:25.924 "bdev_io_pool_size": 65535, 00:04:25.924 "bdev_io_cache_size": 256, 00:04:25.924 "bdev_auto_examine": true, 00:04:25.924 "iobuf_small_cache_size": 128, 00:04:25.924 "iobuf_large_cache_size": 16 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "bdev_raid_set_options", 00:04:25.924 "params": { 00:04:25.924 "process_window_size_kb": 1024, 00:04:25.924 "process_max_bandwidth_mb_sec": 0 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "bdev_iscsi_set_options", 
00:04:25.924 "params": { 00:04:25.924 "timeout_sec": 30 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "bdev_nvme_set_options", 00:04:25.924 "params": { 00:04:25.924 "action_on_timeout": "none", 00:04:25.924 "timeout_us": 0, 00:04:25.924 "timeout_admin_us": 0, 00:04:25.924 "keep_alive_timeout_ms": 10000, 00:04:25.924 "arbitration_burst": 0, 00:04:25.924 "low_priority_weight": 0, 00:04:25.924 "medium_priority_weight": 0, 00:04:25.924 "high_priority_weight": 0, 00:04:25.924 "nvme_adminq_poll_period_us": 10000, 00:04:25.924 "nvme_ioq_poll_period_us": 0, 00:04:25.924 "io_queue_requests": 0, 00:04:25.924 "delay_cmd_submit": true, 00:04:25.924 "transport_retry_count": 4, 00:04:25.924 "bdev_retry_count": 3, 00:04:25.924 "transport_ack_timeout": 0, 00:04:25.924 "ctrlr_loss_timeout_sec": 0, 00:04:25.924 "reconnect_delay_sec": 0, 00:04:25.924 "fast_io_fail_timeout_sec": 0, 00:04:25.924 "disable_auto_failback": false, 00:04:25.924 "generate_uuids": false, 00:04:25.924 "transport_tos": 0, 00:04:25.924 "nvme_error_stat": false, 00:04:25.924 "rdma_srq_size": 0, 00:04:25.924 "io_path_stat": false, 00:04:25.924 "allow_accel_sequence": false, 00:04:25.924 "rdma_max_cq_size": 0, 00:04:25.924 "rdma_cm_event_timeout_ms": 0, 00:04:25.924 "dhchap_digests": [ 00:04:25.924 "sha256", 00:04:25.924 "sha384", 00:04:25.924 "sha512" 00:04:25.924 ], 00:04:25.924 "dhchap_dhgroups": [ 00:04:25.924 "null", 00:04:25.924 "ffdhe2048", 00:04:25.924 "ffdhe3072", 00:04:25.924 "ffdhe4096", 00:04:25.924 "ffdhe6144", 00:04:25.924 "ffdhe8192" 00:04:25.924 ] 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "bdev_nvme_set_hotplug", 00:04:25.924 "params": { 00:04:25.924 "period_us": 100000, 00:04:25.924 "enable": false 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "bdev_wait_for_examine" 00:04:25.924 } 00:04:25.924 ] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "scsi", 00:04:25.924 "config": null 00:04:25.924 }, 00:04:25.924 { 
00:04:25.924 "subsystem": "scheduler", 00:04:25.924 "config": [ 00:04:25.924 { 00:04:25.924 "method": "framework_set_scheduler", 00:04:25.924 "params": { 00:04:25.924 "name": "static" 00:04:25.924 } 00:04:25.924 } 00:04:25.924 ] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "vhost_scsi", 00:04:25.924 "config": [] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "vhost_blk", 00:04:25.924 "config": [] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "ublk", 00:04:25.924 "config": [] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "nbd", 00:04:25.924 "config": [] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "nvmf", 00:04:25.924 "config": [ 00:04:25.924 { 00:04:25.924 "method": "nvmf_set_config", 00:04:25.924 "params": { 00:04:25.924 "discovery_filter": "match_any", 00:04:25.924 "admin_cmd_passthru": { 00:04:25.924 "identify_ctrlr": false 00:04:25.924 }, 00:04:25.924 "dhchap_digests": [ 00:04:25.924 "sha256", 00:04:25.924 "sha384", 00:04:25.924 "sha512" 00:04:25.924 ], 00:04:25.924 "dhchap_dhgroups": [ 00:04:25.924 "null", 00:04:25.924 "ffdhe2048", 00:04:25.924 "ffdhe3072", 00:04:25.924 "ffdhe4096", 00:04:25.924 "ffdhe6144", 00:04:25.924 "ffdhe8192" 00:04:25.924 ] 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "nvmf_set_max_subsystems", 00:04:25.924 "params": { 00:04:25.924 "max_subsystems": 1024 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "nvmf_set_crdt", 00:04:25.924 "params": { 00:04:25.924 "crdt1": 0, 00:04:25.924 "crdt2": 0, 00:04:25.924 "crdt3": 0 00:04:25.924 } 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "method": "nvmf_create_transport", 00:04:25.924 "params": { 00:04:25.924 "trtype": "TCP", 00:04:25.924 "max_queue_depth": 128, 00:04:25.924 "max_io_qpairs_per_ctrlr": 127, 00:04:25.924 "in_capsule_data_size": 4096, 00:04:25.924 "max_io_size": 131072, 00:04:25.924 "io_unit_size": 131072, 00:04:25.924 "max_aq_depth": 128, 00:04:25.924 "num_shared_buffers": 511, 
00:04:25.924 "buf_cache_size": 4294967295, 00:04:25.924 "dif_insert_or_strip": false, 00:04:25.924 "zcopy": false, 00:04:25.924 "c2h_success": true, 00:04:25.924 "sock_priority": 0, 00:04:25.924 "abort_timeout_sec": 1, 00:04:25.924 "ack_timeout": 0, 00:04:25.924 "data_wr_pool_size": 0 00:04:25.924 } 00:04:25.924 } 00:04:25.924 ] 00:04:25.924 }, 00:04:25.924 { 00:04:25.924 "subsystem": "iscsi", 00:04:25.924 "config": [ 00:04:25.924 { 00:04:25.924 "method": "iscsi_set_options", 00:04:25.924 "params": { 00:04:25.924 "node_base": "iqn.2016-06.io.spdk", 00:04:25.924 "max_sessions": 128, 00:04:25.924 "max_connections_per_session": 2, 00:04:25.924 "max_queue_depth": 64, 00:04:25.924 "default_time2wait": 2, 00:04:25.924 "default_time2retain": 20, 00:04:25.924 "first_burst_length": 8192, 00:04:25.924 "immediate_data": true, 00:04:25.924 "allow_duplicated_isid": false, 00:04:25.924 "error_recovery_level": 0, 00:04:25.924 "nop_timeout": 60, 00:04:25.924 "nop_in_interval": 30, 00:04:25.924 "disable_chap": false, 00:04:25.924 "require_chap": false, 00:04:25.924 "mutual_chap": false, 00:04:25.924 "chap_group": 0, 00:04:25.924 "max_large_datain_per_connection": 64, 00:04:25.924 "max_r2t_per_connection": 4, 00:04:25.924 "pdu_pool_size": 36864, 00:04:25.924 "immediate_data_pool_size": 16384, 00:04:25.924 "data_out_pool_size": 2048 00:04:25.924 } 00:04:25.924 } 00:04:25.924 ] 00:04:25.924 } 00:04:25.924 ] 00:04:25.924 } 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57227 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57227 ']' 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57227 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57227 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.924 killing process with pid 57227 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57227' 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57227 00:04:25.924 18:45:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57227 00:04:28.464 18:45:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57272 00:04:28.464 18:45:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:28.464 18:45:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57272 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57272 ']' 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57272 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57272 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:04:33.798 killing process with pid 57272 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57272' 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57272 00:04:33.798 18:45:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57272 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:35.708 00:04:35.708 real 0m11.081s 00:04:35.708 user 0m10.554s 00:04:35.708 sys 0m0.845s 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.708 ************************************ 00:04:35.708 END TEST skip_rpc_with_json 00:04:35.708 ************************************ 00:04:35.708 18:45:18 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:35.708 18:45:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.708 18:45:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.708 18:45:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.708 ************************************ 00:04:35.708 START TEST skip_rpc_with_delay 00:04:35.708 ************************************ 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:35.708 18:45:18 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:35.708 18:45:18 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:35.708 [2024-11-16 18:45:19.019550] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:35.708 18:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:35.708 18:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.708 18:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.708 18:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.708 00:04:35.708 real 0m0.143s 00:04:35.708 user 0m0.073s 00:04:35.708 sys 0m0.069s 00:04:35.708 18:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.708 18:45:19 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:35.708 ************************************ 00:04:35.708 END TEST skip_rpc_with_delay 00:04:35.708 ************************************ 00:04:35.708 18:45:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:35.708 18:45:19 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:35.708 18:45:19 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:35.708 18:45:19 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.708 18:45:19 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.708 18:45:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.708 ************************************ 00:04:35.708 START TEST exit_on_failed_rpc_init 00:04:35.708 ************************************ 00:04:35.708 18:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:35.708 18:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57411 00:04:35.708 18:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.708 18:45:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57411 00:04:35.708 18:45:19 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57411 ']' 00:04:35.708 18:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.708 18:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.708 18:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.708 18:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.708 18:45:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:35.967 [2024-11-16 18:45:19.224595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:35.967 [2024-11-16 18:45:19.224743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57411 ] 00:04:35.967 [2024-11-16 18:45:19.380725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.227 [2024-11-16 18:45:19.492457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.167 18:45:20 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:37.167 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:37.167 [2024-11-16 18:45:20.425014] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:37.167 [2024-11-16 18:45:20.425135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57429 ] 00:04:37.167 [2024-11-16 18:45:20.599972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.427 [2024-11-16 18:45:20.716440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:37.427 [2024-11-16 18:45:20.716551] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:37.427 [2024-11-16 18:45:20.716572] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:37.427 [2024-11-16 18:45:20.716591] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57411 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57411 ']' 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57411 00:04:37.687 18:45:20 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.687 18:45:20 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57411 00:04:37.687 18:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.688 18:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.688 killing process with pid 57411 00:04:37.688 18:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57411' 00:04:37.688 18:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57411 00:04:37.688 18:45:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57411 00:04:40.227 00:04:40.227 real 0m4.189s 00:04:40.227 user 0m4.504s 00:04:40.227 sys 0m0.551s 00:04:40.227 18:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.227 18:45:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.227 ************************************ 00:04:40.227 END TEST exit_on_failed_rpc_init 00:04:40.227 ************************************ 00:04:40.227 18:45:23 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:40.227 00:04:40.227 real 0m23.212s 00:04:40.227 user 0m22.222s 00:04:40.227 sys 0m2.106s 00:04:40.227 18:45:23 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.227 18:45:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.227 ************************************ 00:04:40.227 END TEST skip_rpc 00:04:40.227 ************************************ 00:04:40.227 18:45:23 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.227 18:45:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.227 18:45:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.227 18:45:23 -- common/autotest_common.sh@10 -- # set +x 00:04:40.227 ************************************ 00:04:40.227 START TEST rpc_client 00:04:40.227 ************************************ 00:04:40.227 18:45:23 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:40.227 * Looking for test storage... 00:04:40.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:40.227 18:45:23 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.227 18:45:23 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.227 18:45:23 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.227 18:45:23 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.227 18:45:23 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.227 18:45:23 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.227 18:45:23 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.227 18:45:23 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.227 18:45:23 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.227 18:45:23 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@345 
-- # : 1 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.228 18:45:23 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:40.228 18:45:23 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.228 18:45:23 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.228 --rc genhtml_branch_coverage=1 00:04:40.228 --rc genhtml_function_coverage=1 00:04:40.228 --rc genhtml_legend=1 00:04:40.228 --rc geninfo_all_blocks=1 00:04:40.228 --rc geninfo_unexecuted_blocks=1 00:04:40.228 00:04:40.228 ' 00:04:40.228 18:45:23 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.228 --rc genhtml_branch_coverage=1 00:04:40.228 --rc genhtml_function_coverage=1 00:04:40.228 --rc 
genhtml_legend=1 00:04:40.228 --rc geninfo_all_blocks=1 00:04:40.228 --rc geninfo_unexecuted_blocks=1 00:04:40.228 00:04:40.228 ' 00:04:40.228 18:45:23 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.228 --rc genhtml_branch_coverage=1 00:04:40.228 --rc genhtml_function_coverage=1 00:04:40.228 --rc genhtml_legend=1 00:04:40.228 --rc geninfo_all_blocks=1 00:04:40.228 --rc geninfo_unexecuted_blocks=1 00:04:40.228 00:04:40.228 ' 00:04:40.228 18:45:23 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.228 --rc genhtml_branch_coverage=1 00:04:40.228 --rc genhtml_function_coverage=1 00:04:40.228 --rc genhtml_legend=1 00:04:40.228 --rc geninfo_all_blocks=1 00:04:40.228 --rc geninfo_unexecuted_blocks=1 00:04:40.228 00:04:40.228 ' 00:04:40.228 18:45:23 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:40.488 OK 00:04:40.488 18:45:23 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:40.488 00:04:40.488 real 0m0.288s 00:04:40.488 user 0m0.162s 00:04:40.488 sys 0m0.142s 00:04:40.488 18:45:23 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.488 18:45:23 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:40.488 ************************************ 00:04:40.488 END TEST rpc_client 00:04:40.488 ************************************ 00:04:40.488 18:45:23 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.488 18:45:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.488 18:45:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.488 18:45:23 -- common/autotest_common.sh@10 -- # set +x 00:04:40.488 ************************************ 00:04:40.488 START TEST json_config 
00:04:40.488 ************************************ 00:04:40.488 18:45:23 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:40.488 18:45:23 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.488 18:45:23 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:40.488 18:45:23 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:40.488 18:45:23 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:40.488 18:45:23 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.488 18:45:23 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.488 18:45:23 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.488 18:45:23 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.488 18:45:23 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.488 18:45:23 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.488 18:45:23 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.488 18:45:23 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.488 18:45:23 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.748 18:45:23 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.748 18:45:23 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.748 18:45:23 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:40.748 18:45:23 json_config -- scripts/common.sh@345 -- # : 1 00:04:40.748 18:45:23 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.748 18:45:23 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.748 18:45:23 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:40.748 18:45:23 json_config -- scripts/common.sh@353 -- # local d=1 00:04:40.748 18:45:23 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.748 18:45:23 json_config -- scripts/common.sh@355 -- # echo 1 00:04:40.748 18:45:23 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.748 18:45:23 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:40.748 18:45:23 json_config -- scripts/common.sh@353 -- # local d=2 00:04:40.748 18:45:23 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.748 18:45:23 json_config -- scripts/common.sh@355 -- # echo 2 00:04:40.748 18:45:23 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.748 18:45:23 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.749 18:45:23 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.749 18:45:23 json_config -- scripts/common.sh@368 -- # return 0 00:04:40.749 18:45:23 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.749 18:45:23 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.749 --rc genhtml_branch_coverage=1 00:04:40.749 --rc genhtml_function_coverage=1 00:04:40.749 --rc genhtml_legend=1 00:04:40.749 --rc geninfo_all_blocks=1 00:04:40.749 --rc geninfo_unexecuted_blocks=1 00:04:40.749 00:04:40.749 ' 00:04:40.749 18:45:23 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.749 --rc genhtml_branch_coverage=1 00:04:40.749 --rc genhtml_function_coverage=1 00:04:40.749 --rc genhtml_legend=1 00:04:40.749 --rc geninfo_all_blocks=1 00:04:40.749 --rc geninfo_unexecuted_blocks=1 00:04:40.749 00:04:40.749 ' 00:04:40.749 18:45:23 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.749 --rc genhtml_branch_coverage=1 00:04:40.749 --rc genhtml_function_coverage=1 00:04:40.749 --rc genhtml_legend=1 00:04:40.749 --rc geninfo_all_blocks=1 00:04:40.749 --rc geninfo_unexecuted_blocks=1 00:04:40.749 00:04:40.749 ' 00:04:40.749 18:45:23 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:40.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.749 --rc genhtml_branch_coverage=1 00:04:40.749 --rc genhtml_function_coverage=1 00:04:40.749 --rc genhtml_legend=1 00:04:40.749 --rc geninfo_all_blocks=1 00:04:40.749 --rc geninfo_unexecuted_blocks=1 00:04:40.749 00:04:40.749 ' 00:04:40.749 18:45:23 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:40.749 18:45:23 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79960c01-01ef-4d83-be4c-a620e9048765 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=79960c01-01ef-4d83-be4c-a620e9048765 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:40.749 18:45:24 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:40.749 18:45:24 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:40.749 18:45:24 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:40.749 18:45:24 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:40.749 18:45:24 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.749 18:45:24 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.749 18:45:24 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.749 18:45:24 json_config -- paths/export.sh@5 -- # export PATH 00:04:40.749 18:45:24 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@51 -- # : 0 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:40.749 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:40.749 18:45:24 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:40.749 18:45:24 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:04:40.749 18:45:24 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:40.749 18:45:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:40.749 18:45:24 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:40.749 18:45:24 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:40.749 18:45:24 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:40.749 WARNING: No tests are enabled so not running JSON configuration tests 00:04:40.749 18:45:24 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:40.749 00:04:40.749 real 0m0.228s 00:04:40.749 user 0m0.145s 00:04:40.749 sys 0m0.088s 00:04:40.749 18:45:24 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.749 18:45:24 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.749 ************************************ 00:04:40.749 END TEST json_config 00:04:40.749 ************************************ 00:04:40.749 18:45:24 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:40.749 18:45:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:40.749 18:45:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.749 18:45:24 -- common/autotest_common.sh@10 -- # set +x 00:04:40.749 ************************************ 00:04:40.749 START TEST json_config_extra_key 00:04:40.749 ************************************ 00:04:40.749 18:45:24 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:40.749 18:45:24 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:40.749 18:45:24 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:04:40.749 18:45:24 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.010 18:45:24 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.010 18:45:24 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:41.010 18:45:24 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.010 18:45:24 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.010 --rc genhtml_branch_coverage=1 00:04:41.010 --rc genhtml_function_coverage=1 00:04:41.010 --rc genhtml_legend=1 00:04:41.010 --rc geninfo_all_blocks=1 00:04:41.010 --rc geninfo_unexecuted_blocks=1 00:04:41.010 00:04:41.010 ' 00:04:41.010 18:45:24 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.010 --rc genhtml_branch_coverage=1 00:04:41.010 --rc genhtml_function_coverage=1 00:04:41.010 --rc 
genhtml_legend=1 00:04:41.010 --rc geninfo_all_blocks=1 00:04:41.010 --rc geninfo_unexecuted_blocks=1 00:04:41.010 00:04:41.010 ' 00:04:41.010 18:45:24 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.010 --rc genhtml_branch_coverage=1 00:04:41.010 --rc genhtml_function_coverage=1 00:04:41.010 --rc genhtml_legend=1 00:04:41.010 --rc geninfo_all_blocks=1 00:04:41.010 --rc geninfo_unexecuted_blocks=1 00:04:41.010 00:04:41.010 ' 00:04:41.011 18:45:24 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.011 --rc genhtml_branch_coverage=1 00:04:41.011 --rc genhtml_function_coverage=1 00:04:41.011 --rc genhtml_legend=1 00:04:41.011 --rc geninfo_all_blocks=1 00:04:41.011 --rc geninfo_unexecuted_blocks=1 00:04:41.011 00:04:41.011 ' 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79960c01-01ef-4d83-be4c-a620e9048765 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=79960c01-01ef-4d83-be4c-a620e9048765 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.011 18:45:24 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.011 18:45:24 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.011 18:45:24 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.011 18:45:24 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.011 18:45:24 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.011 18:45:24 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.011 18:45:24 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.011 18:45:24 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:41.011 18:45:24 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:41.011 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:41.011 18:45:24 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:41.011 INFO: launching applications... 00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:04:41.011 18:45:24 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57639 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:41.011 Waiting for target to run... 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:41.011 18:45:24 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57639 /var/tmp/spdk_tgt.sock 00:04:41.011 18:45:24 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57639 ']' 00:04:41.011 18:45:24 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.011 18:45:24 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:41.011 18:45:24 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.011 18:45:24 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.011 18:45:24 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.011 [2024-11-16 18:45:24.412105] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:41.011 [2024-11-16 18:45:24.412222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57639 ] 00:04:41.581 [2024-11-16 18:45:24.796104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.581 [2024-11-16 18:45:24.898350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.150 18:45:25 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.150 18:45:25 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:42.150 00:04:42.150 18:45:25 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:42.150 INFO: shutting down applications... 00:04:42.150 18:45:25 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:42.150 18:45:25 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:42.150 18:45:25 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:42.150 18:45:25 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:42.150 18:45:25 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57639 ]] 00:04:42.150 18:45:25 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57639 00:04:42.150 18:45:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:42.150 18:45:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.150 18:45:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57639 00:04:42.150 18:45:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:42.720 18:45:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:42.720 18:45:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:42.720 18:45:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57639 00:04:42.720 18:45:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.290 18:45:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.290 18:45:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.290 18:45:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57639 00:04:43.290 18:45:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:43.860 18:45:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:43.860 18:45:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:43.860 18:45:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57639 00:04:43.860 18:45:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.430 18:45:27 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:04:44.430 18:45:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.430 18:45:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57639 00:04:44.430 18:45:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:44.702 18:45:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:44.702 18:45:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:44.702 18:45:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57639 00:04:44.702 18:45:28 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:45.270 18:45:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:45.270 18:45:28 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:45.270 18:45:28 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57639 00:04:45.270 18:45:28 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:45.270 18:45:28 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:45.270 18:45:28 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:45.270 SPDK target shutdown done 00:04:45.270 18:45:28 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:45.270 Success 00:04:45.270 18:45:28 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:45.270 00:04:45.270 real 0m4.558s 00:04:45.270 user 0m3.938s 00:04:45.270 sys 0m0.565s 00:04:45.270 18:45:28 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.270 18:45:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:45.270 ************************************ 00:04:45.270 END TEST json_config_extra_key 00:04:45.270 ************************************ 00:04:45.270 18:45:28 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.270 18:45:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.270 18:45:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.270 18:45:28 -- common/autotest_common.sh@10 -- # set +x 00:04:45.270 ************************************ 00:04:45.270 START TEST alias_rpc 00:04:45.270 ************************************ 00:04:45.270 18:45:28 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:45.529 * Looking for test storage... 00:04:45.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:45.529 18:45:28 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:45.529 18:45:28 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:45.529 18:45:28 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:45.529 18:45:28 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:45.529 18:45:28 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.529 18:45:28 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:45.529 18:45:28 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.529 18:45:28 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:45.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.529 --rc genhtml_branch_coverage=1 00:04:45.529 --rc genhtml_function_coverage=1 00:04:45.529 --rc genhtml_legend=1 00:04:45.529 --rc geninfo_all_blocks=1 00:04:45.529 --rc geninfo_unexecuted_blocks=1 00:04:45.529 00:04:45.529 ' 00:04:45.529 18:45:28 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:45.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.529 --rc genhtml_branch_coverage=1 00:04:45.529 --rc genhtml_function_coverage=1 00:04:45.529 --rc 
genhtml_legend=1 00:04:45.529 --rc geninfo_all_blocks=1 00:04:45.529 --rc geninfo_unexecuted_blocks=1 00:04:45.529 00:04:45.529 ' 00:04:45.529 18:45:28 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:45.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.529 --rc genhtml_branch_coverage=1 00:04:45.529 --rc genhtml_function_coverage=1 00:04:45.529 --rc genhtml_legend=1 00:04:45.529 --rc geninfo_all_blocks=1 00:04:45.529 --rc geninfo_unexecuted_blocks=1 00:04:45.529 00:04:45.529 ' 00:04:45.530 18:45:28 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:45.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.530 --rc genhtml_branch_coverage=1 00:04:45.530 --rc genhtml_function_coverage=1 00:04:45.530 --rc genhtml_legend=1 00:04:45.530 --rc geninfo_all_blocks=1 00:04:45.530 --rc geninfo_unexecuted_blocks=1 00:04:45.530 00:04:45.530 ' 00:04:45.530 18:45:28 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:45.530 18:45:28 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.530 18:45:28 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57745 00:04:45.530 18:45:28 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57745 00:04:45.530 18:45:28 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57745 ']' 00:04:45.530 18:45:28 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.530 18:45:28 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.530 18:45:28 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:45.530 18:45:28 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.530 18:45:28 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.789 [2024-11-16 18:45:29.011015] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:45.789 [2024-11-16 18:45:29.011217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57745 ] 00:04:45.789 [2024-11-16 18:45:29.184289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.048 [2024-11-16 18:45:29.299098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:46.987 18:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:46.987 18:45:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57745 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57745 ']' 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57745 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57745 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57745' 00:04:46.987 killing process with pid 57745 00:04:46.987 18:45:30 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57745 00:04:46.987 18:45:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 57745 00:04:49.526 00:04:49.526 real 0m3.920s 00:04:49.526 user 0m3.903s 00:04:49.526 sys 0m0.551s 00:04:49.526 18:45:32 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.526 18:45:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.526 ************************************ 00:04:49.526 END TEST alias_rpc 00:04:49.526 ************************************ 00:04:49.526 18:45:32 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:49.526 18:45:32 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:49.526 18:45:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.526 18:45:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.526 18:45:32 -- common/autotest_common.sh@10 -- # set +x 00:04:49.526 ************************************ 00:04:49.526 START TEST spdkcli_tcp 00:04:49.526 ************************************ 00:04:49.526 18:45:32 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:49.526 * Looking for test storage... 
00:04:49.526 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:49.526 18:45:32 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.526 18:45:32 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.526 18:45:32 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.526 18:45:32 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.526 18:45:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.526 18:45:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.527 18:45:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.527 --rc genhtml_branch_coverage=1 00:04:49.527 --rc genhtml_function_coverage=1 00:04:49.527 --rc genhtml_legend=1 00:04:49.527 --rc geninfo_all_blocks=1 00:04:49.527 --rc geninfo_unexecuted_blocks=1 00:04:49.527 00:04:49.527 ' 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.527 --rc genhtml_branch_coverage=1 00:04:49.527 --rc genhtml_function_coverage=1 00:04:49.527 --rc genhtml_legend=1 00:04:49.527 --rc geninfo_all_blocks=1 00:04:49.527 --rc geninfo_unexecuted_blocks=1 00:04:49.527 00:04:49.527 ' 00:04:49.527 18:45:32 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:49.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.527 --rc genhtml_branch_coverage=1 00:04:49.527 --rc genhtml_function_coverage=1 00:04:49.527 --rc genhtml_legend=1 00:04:49.527 --rc geninfo_all_blocks=1 00:04:49.527 --rc geninfo_unexecuted_blocks=1 00:04:49.527 00:04:49.527 ' 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.527 --rc genhtml_branch_coverage=1 00:04:49.527 --rc genhtml_function_coverage=1 00:04:49.527 --rc genhtml_legend=1 00:04:49.527 --rc geninfo_all_blocks=1 00:04:49.527 --rc geninfo_unexecuted_blocks=1 00:04:49.527 00:04:49.527 ' 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57852 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:49.527 18:45:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57852 00:04:49.527 18:45:32 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57852 ']' 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.527 18:45:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:49.787 [2024-11-16 18:45:33.013456] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:49.787 [2024-11-16 18:45:33.013687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57852 ] 00:04:49.787 [2024-11-16 18:45:33.185266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.046 [2024-11-16 18:45:33.295556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.046 [2024-11-16 18:45:33.295594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.985 18:45:34 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.985 18:45:34 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:50.985 18:45:34 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57869 00:04:50.985 18:45:34 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:50.985 18:45:34 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:50.985 [ 00:04:50.985 "bdev_malloc_delete", 
00:04:50.985 "bdev_malloc_create", 00:04:50.985 "bdev_null_resize", 00:04:50.985 "bdev_null_delete", 00:04:50.985 "bdev_null_create", 00:04:50.985 "bdev_nvme_cuse_unregister", 00:04:50.985 "bdev_nvme_cuse_register", 00:04:50.985 "bdev_opal_new_user", 00:04:50.985 "bdev_opal_set_lock_state", 00:04:50.985 "bdev_opal_delete", 00:04:50.985 "bdev_opal_get_info", 00:04:50.985 "bdev_opal_create", 00:04:50.985 "bdev_nvme_opal_revert", 00:04:50.985 "bdev_nvme_opal_init", 00:04:50.985 "bdev_nvme_send_cmd", 00:04:50.985 "bdev_nvme_set_keys", 00:04:50.985 "bdev_nvme_get_path_iostat", 00:04:50.985 "bdev_nvme_get_mdns_discovery_info", 00:04:50.985 "bdev_nvme_stop_mdns_discovery", 00:04:50.985 "bdev_nvme_start_mdns_discovery", 00:04:50.985 "bdev_nvme_set_multipath_policy", 00:04:50.985 "bdev_nvme_set_preferred_path", 00:04:50.985 "bdev_nvme_get_io_paths", 00:04:50.985 "bdev_nvme_remove_error_injection", 00:04:50.985 "bdev_nvme_add_error_injection", 00:04:50.985 "bdev_nvme_get_discovery_info", 00:04:50.985 "bdev_nvme_stop_discovery", 00:04:50.985 "bdev_nvme_start_discovery", 00:04:50.985 "bdev_nvme_get_controller_health_info", 00:04:50.985 "bdev_nvme_disable_controller", 00:04:50.985 "bdev_nvme_enable_controller", 00:04:50.985 "bdev_nvme_reset_controller", 00:04:50.985 "bdev_nvme_get_transport_statistics", 00:04:50.985 "bdev_nvme_apply_firmware", 00:04:50.985 "bdev_nvme_detach_controller", 00:04:50.985 "bdev_nvme_get_controllers", 00:04:50.985 "bdev_nvme_attach_controller", 00:04:50.985 "bdev_nvme_set_hotplug", 00:04:50.985 "bdev_nvme_set_options", 00:04:50.985 "bdev_passthru_delete", 00:04:50.985 "bdev_passthru_create", 00:04:50.985 "bdev_lvol_set_parent_bdev", 00:04:50.985 "bdev_lvol_set_parent", 00:04:50.985 "bdev_lvol_check_shallow_copy", 00:04:50.985 "bdev_lvol_start_shallow_copy", 00:04:50.985 "bdev_lvol_grow_lvstore", 00:04:50.985 "bdev_lvol_get_lvols", 00:04:50.985 "bdev_lvol_get_lvstores", 00:04:50.985 "bdev_lvol_delete", 00:04:50.985 "bdev_lvol_set_read_only", 
00:04:50.985 "bdev_lvol_resize", 00:04:50.985 "bdev_lvol_decouple_parent", 00:04:50.985 "bdev_lvol_inflate", 00:04:50.985 "bdev_lvol_rename", 00:04:50.985 "bdev_lvol_clone_bdev", 00:04:50.985 "bdev_lvol_clone", 00:04:50.985 "bdev_lvol_snapshot", 00:04:50.985 "bdev_lvol_create", 00:04:50.985 "bdev_lvol_delete_lvstore", 00:04:50.985 "bdev_lvol_rename_lvstore", 00:04:50.985 "bdev_lvol_create_lvstore", 00:04:50.985 "bdev_raid_set_options", 00:04:50.985 "bdev_raid_remove_base_bdev", 00:04:50.985 "bdev_raid_add_base_bdev", 00:04:50.985 "bdev_raid_delete", 00:04:50.986 "bdev_raid_create", 00:04:50.986 "bdev_raid_get_bdevs", 00:04:50.986 "bdev_error_inject_error", 00:04:50.986 "bdev_error_delete", 00:04:50.986 "bdev_error_create", 00:04:50.986 "bdev_split_delete", 00:04:50.986 "bdev_split_create", 00:04:50.986 "bdev_delay_delete", 00:04:50.986 "bdev_delay_create", 00:04:50.986 "bdev_delay_update_latency", 00:04:50.986 "bdev_zone_block_delete", 00:04:50.986 "bdev_zone_block_create", 00:04:50.986 "blobfs_create", 00:04:50.986 "blobfs_detect", 00:04:50.986 "blobfs_set_cache_size", 00:04:50.986 "bdev_aio_delete", 00:04:50.986 "bdev_aio_rescan", 00:04:50.986 "bdev_aio_create", 00:04:50.986 "bdev_ftl_set_property", 00:04:50.986 "bdev_ftl_get_properties", 00:04:50.986 "bdev_ftl_get_stats", 00:04:50.986 "bdev_ftl_unmap", 00:04:50.986 "bdev_ftl_unload", 00:04:50.986 "bdev_ftl_delete", 00:04:50.986 "bdev_ftl_load", 00:04:50.986 "bdev_ftl_create", 00:04:50.986 "bdev_virtio_attach_controller", 00:04:50.986 "bdev_virtio_scsi_get_devices", 00:04:50.986 "bdev_virtio_detach_controller", 00:04:50.986 "bdev_virtio_blk_set_hotplug", 00:04:50.986 "bdev_iscsi_delete", 00:04:50.986 "bdev_iscsi_create", 00:04:50.986 "bdev_iscsi_set_options", 00:04:50.986 "accel_error_inject_error", 00:04:50.986 "ioat_scan_accel_module", 00:04:50.986 "dsa_scan_accel_module", 00:04:50.986 "iaa_scan_accel_module", 00:04:50.986 "keyring_file_remove_key", 00:04:50.986 "keyring_file_add_key", 00:04:50.986 
"keyring_linux_set_options", 00:04:50.986 "fsdev_aio_delete", 00:04:50.986 "fsdev_aio_create", 00:04:50.986 "iscsi_get_histogram", 00:04:50.986 "iscsi_enable_histogram", 00:04:50.986 "iscsi_set_options", 00:04:50.986 "iscsi_get_auth_groups", 00:04:50.986 "iscsi_auth_group_remove_secret", 00:04:50.986 "iscsi_auth_group_add_secret", 00:04:50.986 "iscsi_delete_auth_group", 00:04:50.986 "iscsi_create_auth_group", 00:04:50.986 "iscsi_set_discovery_auth", 00:04:50.986 "iscsi_get_options", 00:04:50.986 "iscsi_target_node_request_logout", 00:04:50.986 "iscsi_target_node_set_redirect", 00:04:50.986 "iscsi_target_node_set_auth", 00:04:50.986 "iscsi_target_node_add_lun", 00:04:50.986 "iscsi_get_stats", 00:04:50.986 "iscsi_get_connections", 00:04:50.986 "iscsi_portal_group_set_auth", 00:04:50.986 "iscsi_start_portal_group", 00:04:50.986 "iscsi_delete_portal_group", 00:04:50.986 "iscsi_create_portal_group", 00:04:50.986 "iscsi_get_portal_groups", 00:04:50.986 "iscsi_delete_target_node", 00:04:50.986 "iscsi_target_node_remove_pg_ig_maps", 00:04:50.986 "iscsi_target_node_add_pg_ig_maps", 00:04:50.986 "iscsi_create_target_node", 00:04:50.986 "iscsi_get_target_nodes", 00:04:50.986 "iscsi_delete_initiator_group", 00:04:50.986 "iscsi_initiator_group_remove_initiators", 00:04:50.986 "iscsi_initiator_group_add_initiators", 00:04:50.986 "iscsi_create_initiator_group", 00:04:50.986 "iscsi_get_initiator_groups", 00:04:50.986 "nvmf_set_crdt", 00:04:50.986 "nvmf_set_config", 00:04:50.986 "nvmf_set_max_subsystems", 00:04:50.986 "nvmf_stop_mdns_prr", 00:04:50.986 "nvmf_publish_mdns_prr", 00:04:50.986 "nvmf_subsystem_get_listeners", 00:04:50.986 "nvmf_subsystem_get_qpairs", 00:04:50.986 "nvmf_subsystem_get_controllers", 00:04:50.986 "nvmf_get_stats", 00:04:50.986 "nvmf_get_transports", 00:04:50.986 "nvmf_create_transport", 00:04:50.986 "nvmf_get_targets", 00:04:50.986 "nvmf_delete_target", 00:04:50.986 "nvmf_create_target", 00:04:50.986 "nvmf_subsystem_allow_any_host", 00:04:50.986 
"nvmf_subsystem_set_keys", 00:04:50.986 "nvmf_subsystem_remove_host", 00:04:50.986 "nvmf_subsystem_add_host", 00:04:50.986 "nvmf_ns_remove_host", 00:04:50.986 "nvmf_ns_add_host", 00:04:50.986 "nvmf_subsystem_remove_ns", 00:04:50.986 "nvmf_subsystem_set_ns_ana_group", 00:04:50.986 "nvmf_subsystem_add_ns", 00:04:50.986 "nvmf_subsystem_listener_set_ana_state", 00:04:50.986 "nvmf_discovery_get_referrals", 00:04:50.986 "nvmf_discovery_remove_referral", 00:04:50.986 "nvmf_discovery_add_referral", 00:04:50.986 "nvmf_subsystem_remove_listener", 00:04:50.986 "nvmf_subsystem_add_listener", 00:04:50.986 "nvmf_delete_subsystem", 00:04:50.986 "nvmf_create_subsystem", 00:04:50.986 "nvmf_get_subsystems", 00:04:50.986 "env_dpdk_get_mem_stats", 00:04:50.986 "nbd_get_disks", 00:04:50.986 "nbd_stop_disk", 00:04:50.986 "nbd_start_disk", 00:04:50.986 "ublk_recover_disk", 00:04:50.986 "ublk_get_disks", 00:04:50.986 "ublk_stop_disk", 00:04:50.986 "ublk_start_disk", 00:04:50.986 "ublk_destroy_target", 00:04:50.986 "ublk_create_target", 00:04:50.986 "virtio_blk_create_transport", 00:04:50.986 "virtio_blk_get_transports", 00:04:50.986 "vhost_controller_set_coalescing", 00:04:50.986 "vhost_get_controllers", 00:04:50.986 "vhost_delete_controller", 00:04:50.986 "vhost_create_blk_controller", 00:04:50.986 "vhost_scsi_controller_remove_target", 00:04:50.986 "vhost_scsi_controller_add_target", 00:04:50.986 "vhost_start_scsi_controller", 00:04:50.986 "vhost_create_scsi_controller", 00:04:50.986 "thread_set_cpumask", 00:04:50.986 "scheduler_set_options", 00:04:50.986 "framework_get_governor", 00:04:50.986 "framework_get_scheduler", 00:04:50.986 "framework_set_scheduler", 00:04:50.986 "framework_get_reactors", 00:04:50.986 "thread_get_io_channels", 00:04:50.986 "thread_get_pollers", 00:04:50.986 "thread_get_stats", 00:04:50.986 "framework_monitor_context_switch", 00:04:50.986 "spdk_kill_instance", 00:04:50.986 "log_enable_timestamps", 00:04:50.986 "log_get_flags", 00:04:50.986 "log_clear_flag", 
00:04:50.986 "log_set_flag", 00:04:50.986 "log_get_level", 00:04:50.986 "log_set_level", 00:04:50.986 "log_get_print_level", 00:04:50.986 "log_set_print_level", 00:04:50.986 "framework_enable_cpumask_locks", 00:04:50.986 "framework_disable_cpumask_locks", 00:04:50.986 "framework_wait_init", 00:04:50.986 "framework_start_init", 00:04:50.986 "scsi_get_devices", 00:04:50.986 "bdev_get_histogram", 00:04:50.986 "bdev_enable_histogram", 00:04:50.986 "bdev_set_qos_limit", 00:04:50.986 "bdev_set_qd_sampling_period", 00:04:50.986 "bdev_get_bdevs", 00:04:50.986 "bdev_reset_iostat", 00:04:50.986 "bdev_get_iostat", 00:04:50.986 "bdev_examine", 00:04:50.987 "bdev_wait_for_examine", 00:04:50.987 "bdev_set_options", 00:04:50.987 "accel_get_stats", 00:04:50.987 "accel_set_options", 00:04:50.987 "accel_set_driver", 00:04:50.987 "accel_crypto_key_destroy", 00:04:50.987 "accel_crypto_keys_get", 00:04:50.987 "accel_crypto_key_create", 00:04:50.987 "accel_assign_opc", 00:04:50.987 "accel_get_module_info", 00:04:50.987 "accel_get_opc_assignments", 00:04:50.987 "vmd_rescan", 00:04:50.987 "vmd_remove_device", 00:04:50.987 "vmd_enable", 00:04:50.987 "sock_get_default_impl", 00:04:50.987 "sock_set_default_impl", 00:04:50.987 "sock_impl_set_options", 00:04:50.987 "sock_impl_get_options", 00:04:50.987 "iobuf_get_stats", 00:04:50.987 "iobuf_set_options", 00:04:50.987 "keyring_get_keys", 00:04:50.987 "framework_get_pci_devices", 00:04:50.987 "framework_get_config", 00:04:50.987 "framework_get_subsystems", 00:04:50.987 "fsdev_set_opts", 00:04:50.987 "fsdev_get_opts", 00:04:50.987 "trace_get_info", 00:04:50.987 "trace_get_tpoint_group_mask", 00:04:50.987 "trace_disable_tpoint_group", 00:04:50.987 "trace_enable_tpoint_group", 00:04:50.987 "trace_clear_tpoint_mask", 00:04:50.987 "trace_set_tpoint_mask", 00:04:50.987 "notify_get_notifications", 00:04:50.987 "notify_get_types", 00:04:50.987 "spdk_get_version", 00:04:50.987 "rpc_get_methods" 00:04:50.987 ] 00:04:50.987 18:45:34 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.987 18:45:34 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:50.987 18:45:34 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57852 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57852 ']' 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57852 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57852 00:04:50.987 killing process with pid 57852 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57852' 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57852 00:04:50.987 18:45:34 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57852 00:04:53.529 ************************************ 00:04:53.529 END TEST spdkcli_tcp 00:04:53.529 ************************************ 00:04:53.529 00:04:53.529 real 0m3.976s 00:04:53.529 user 0m7.021s 00:04:53.529 sys 0m0.636s 00:04:53.529 18:45:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.529 18:45:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:53.529 18:45:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.529 18:45:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.529 18:45:36 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.529 18:45:36 -- common/autotest_common.sh@10 -- # set +x 00:04:53.529 ************************************ 00:04:53.529 START TEST dpdk_mem_utility 00:04:53.529 ************************************ 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.529 * Looking for test storage... 00:04:53.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:53.529 
18:45:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.529 18:45:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.529 --rc genhtml_branch_coverage=1 00:04:53.529 --rc genhtml_function_coverage=1 00:04:53.529 --rc genhtml_legend=1 00:04:53.529 --rc geninfo_all_blocks=1 00:04:53.529 --rc geninfo_unexecuted_blocks=1 00:04:53.529 00:04:53.529 ' 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.529 --rc 
genhtml_branch_coverage=1 00:04:53.529 --rc genhtml_function_coverage=1 00:04:53.529 --rc genhtml_legend=1 00:04:53.529 --rc geninfo_all_blocks=1 00:04:53.529 --rc geninfo_unexecuted_blocks=1 00:04:53.529 00:04:53.529 ' 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.529 --rc genhtml_branch_coverage=1 00:04:53.529 --rc genhtml_function_coverage=1 00:04:53.529 --rc genhtml_legend=1 00:04:53.529 --rc geninfo_all_blocks=1 00:04:53.529 --rc geninfo_unexecuted_blocks=1 00:04:53.529 00:04:53.529 ' 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.529 --rc genhtml_branch_coverage=1 00:04:53.529 --rc genhtml_function_coverage=1 00:04:53.529 --rc genhtml_legend=1 00:04:53.529 --rc geninfo_all_blocks=1 00:04:53.529 --rc geninfo_unexecuted_blocks=1 00:04:53.529 00:04:53.529 ' 00:04:53.529 18:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:53.529 18:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57974 00:04:53.529 18:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.529 18:45:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57974 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57974 ']' 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:53.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:53.529 18:45:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.790 [2024-11-16 18:45:37.037364] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:04:53.790 [2024-11-16 18:45:37.037579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57974 ] 00:04:53.790 [2024-11-16 18:45:37.209497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.050 [2024-11-16 18:45:37.318861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.992 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.992 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:54.992 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:54.992 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:54.992 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.992 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:54.992 { 00:04:54.992 "filename": "/tmp/spdk_mem_dump.txt" 00:04:54.992 } 00:04:54.992 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.992 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:54.992 DPDK memory size 816.000000 MiB in 1 heap(s) 00:04:54.992 1 heaps totaling size 816.000000 MiB 00:04:54.992 size: 
816.000000 MiB heap id: 0 00:04:54.992 end heaps---------- 00:04:54.992 9 mempools totaling size 595.772034 MiB 00:04:54.992 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:54.992 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:54.992 size: 92.545471 MiB name: bdev_io_57974 00:04:54.992 size: 50.003479 MiB name: msgpool_57974 00:04:54.992 size: 36.509338 MiB name: fsdev_io_57974 00:04:54.992 size: 21.763794 MiB name: PDU_Pool 00:04:54.992 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:54.992 size: 4.133484 MiB name: evtpool_57974 00:04:54.992 size: 0.026123 MiB name: Session_Pool 00:04:54.992 end mempools------- 00:04:54.992 6 memzones totaling size 4.142822 MiB 00:04:54.992 size: 1.000366 MiB name: RG_ring_0_57974 00:04:54.992 size: 1.000366 MiB name: RG_ring_1_57974 00:04:54.992 size: 1.000366 MiB name: RG_ring_4_57974 00:04:54.992 size: 1.000366 MiB name: RG_ring_5_57974 00:04:54.992 size: 0.125366 MiB name: RG_ring_2_57974 00:04:54.992 size: 0.015991 MiB name: RG_ring_3_57974 00:04:54.992 end memzones------- 00:04:54.992 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:54.992 heap id: 0 total size: 816.000000 MiB number of busy elements: 319 number of free elements: 18 00:04:54.992 list of free elements. 
size: 16.790405 MiB 00:04:54.992 element at address: 0x200006400000 with size: 1.995972 MiB 00:04:54.992 element at address: 0x20000a600000 with size: 1.995972 MiB 00:04:54.992 element at address: 0x200003e00000 with size: 1.991028 MiB 00:04:54.992 element at address: 0x200018d00040 with size: 0.999939 MiB 00:04:54.992 element at address: 0x200019100040 with size: 0.999939 MiB 00:04:54.992 element at address: 0x200019200000 with size: 0.999084 MiB 00:04:54.992 element at address: 0x200031e00000 with size: 0.994324 MiB 00:04:54.992 element at address: 0x200000400000 with size: 0.992004 MiB 00:04:54.992 element at address: 0x200018a00000 with size: 0.959656 MiB 00:04:54.992 element at address: 0x200019500040 with size: 0.936401 MiB 00:04:54.993 element at address: 0x200000200000 with size: 0.716980 MiB 00:04:54.993 element at address: 0x20001ac00000 with size: 0.560974 MiB 00:04:54.993 element at address: 0x200000c00000 with size: 0.490173 MiB 00:04:54.993 element at address: 0x200018e00000 with size: 0.487976 MiB 00:04:54.993 element at address: 0x200019600000 with size: 0.485413 MiB 00:04:54.993 element at address: 0x200012c00000 with size: 0.443237 MiB 00:04:54.993 element at address: 0x200028000000 with size: 0.390442 MiB 00:04:54.993 element at address: 0x200000800000 with size: 0.350891 MiB 00:04:54.993 list of standard malloc elements. 
size: 199.288696 MiB 00:04:54.993 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:04:54.993 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:04:54.993 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:04:54.993 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:04:54.993 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:04:54.993 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:04:54.993 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:04:54.993 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:04:54.993 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:04:54.993 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:04:54.993 element at address: 0x200012bff040 with size: 0.000305 MiB 00:04:54.993 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:04:54.993 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:04:54.993 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e8c0 with 
size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200000cff000 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bff180 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bff280 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bff380 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bff480 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bff580 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bff680 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bff780 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bff880 with size: 0.000244 MiB 00:04:54.993 element at address: 
0x200012bff980 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c71780 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c71880 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c71980 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c72080 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012c72180 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:04:54.993 
element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:04:54.993 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac90dc0 with size: 0.000244 
MiB 00:04:54.994 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac929c0 
with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:04:54.994 element at 
address: 0x20001ac945c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:04:54.994 element at address: 0x200028063f40 with size: 0.000244 MiB 00:04:54.994 element at address: 0x200028064040 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806af80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806b080 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806b180 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806b280 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806b380 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806b480 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806b580 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806b680 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806b780 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806b880 with size: 0.000244 MiB 
00:04:54.994 element at address: 0x20002806b980 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806be80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c080 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c180 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c280 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c380 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c480 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c580 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c680 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c780 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c880 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806c980 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d080 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d180 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d280 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d380 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d480 with 
size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d580 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d680 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d780 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d880 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806d980 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806da80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806db80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806de80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806df80 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806e080 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806e180 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806e280 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806e380 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806e480 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806e580 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806e680 with size: 0.000244 MiB 00:04:54.994 element at address: 0x20002806e780 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806e880 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806e980 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:04:54.995 element at address: 
0x20002806f080 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806f180 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806f280 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806f380 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806f480 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806f580 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806f680 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806f780 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806f880 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806f980 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:04:54.995 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:04:54.995 list of memzone associated elements. 
size: 599.920898 MiB 00:04:54.995 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:04:54.995 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:54.995 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:04:54.995 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:54.995 element at address: 0x200012df4740 with size: 92.045105 MiB 00:04:54.995 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57974_0 00:04:54.995 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:54.995 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57974_0 00:04:54.995 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:54.995 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57974_0 00:04:54.995 element at address: 0x2000197be900 with size: 20.255615 MiB 00:04:54.995 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:54.995 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:04:54.995 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:54.995 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:54.995 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57974_0 00:04:54.995 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:54.995 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57974 00:04:54.995 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:54.995 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57974 00:04:54.995 element at address: 0x200018efde00 with size: 1.008179 MiB 00:04:54.995 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:54.995 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:04:54.995 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:54.995 element at address: 0x200018afde00 with size: 1.008179 MiB 00:04:54.995 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:54.995 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:04:54.995 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:54.995 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:54.995 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57974 00:04:54.995 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:54.995 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57974 00:04:54.995 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:04:54.995 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57974 00:04:54.995 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:04:54.995 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57974 00:04:54.995 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:54.995 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57974 00:04:54.995 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:54.995 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57974 00:04:54.995 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:04:54.995 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:54.995 element at address: 0x200012c72280 with size: 0.500549 MiB 00:04:54.995 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:54.995 element at address: 0x20001967c440 with size: 0.250549 MiB 00:04:54.995 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:54.995 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:54.995 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57974 00:04:54.995 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:54.995 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57974 00:04:54.995 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:04:54.995 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:54.995 element at address: 0x200028064140 with size: 0.023804 MiB 00:04:54.995 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:54.995 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:54.995 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57974 00:04:54.995 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:04:54.995 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:54.995 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:54.995 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57974 00:04:54.995 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:54.995 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57974 00:04:54.995 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:54.995 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57974 00:04:54.995 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:04:54.995 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:54.995 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:54.995 18:45:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57974 00:04:54.995 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57974 ']' 00:04:54.995 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57974 00:04:54.995 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:54.995 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:54.995 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57974 00:04:54.995 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:54.995 18:45:38 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:54.995 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57974' 00:04:54.995 killing process with pid 57974 00:04:54.995 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57974 00:04:54.995 18:45:38 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57974 00:04:57.555 00:04:57.555 real 0m3.872s 00:04:57.555 user 0m3.787s 00:04:57.555 sys 0m0.544s 00:04:57.555 18:45:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.555 18:45:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.555 ************************************ 00:04:57.555 END TEST dpdk_mem_utility 00:04:57.555 ************************************ 00:04:57.555 18:45:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.555 18:45:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.555 18:45:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.555 18:45:40 -- common/autotest_common.sh@10 -- # set +x 00:04:57.555 ************************************ 00:04:57.555 START TEST event 00:04:57.555 ************************************ 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:57.555 * Looking for test storage... 
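The xtrace above (autotest_common.sh@954–978) shows the teardown path: `killprocess 57974` verifies the pid, checks the process name via `ps`, refuses to kill a `sudo` wrapper, then kills and reaps it. A hedged sketch of that flow, with the function body simplified from what the trace shows (the real helper lives in autotest_common.sh and may differ in detail):

```shell
#!/usr/bin/env bash
# Hedged sketch of the killprocess teardown traced above; the exact body in
# autotest_common.sh is not reproduced here, only the visible steps.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1               # the "'[' -z 57974 ']'" guard
    kill -0 "$pid" 2>/dev/null || return 0  # the "kill -0" liveness probe
    if [ "$(uname)" = Linux ]; then
        local pname
        pname=$(ps --no-headers -o comm= "$pid")
        [ "$pname" = sudo ] && return 1     # never kill a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true         # reap so the pid is fully gone
}
```

In the log the target is `reactor_0` (the SPDK app's main reactor thread name), so the `sudo` guard passes and the process is killed.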
00:04:57.555 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.555 18:45:40 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.555 18:45:40 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.555 18:45:40 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.555 18:45:40 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.555 18:45:40 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.555 18:45:40 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.555 18:45:40 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.555 18:45:40 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.555 18:45:40 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.555 18:45:40 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.555 18:45:40 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.555 18:45:40 event -- scripts/common.sh@344 -- # case "$op" in 00:04:57.555 18:45:40 event -- scripts/common.sh@345 -- # : 1 00:04:57.555 18:45:40 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.555 18:45:40 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.555 18:45:40 event -- scripts/common.sh@365 -- # decimal 1 00:04:57.555 18:45:40 event -- scripts/common.sh@353 -- # local d=1 00:04:57.555 18:45:40 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.555 18:45:40 event -- scripts/common.sh@355 -- # echo 1 00:04:57.555 18:45:40 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.555 18:45:40 event -- scripts/common.sh@366 -- # decimal 2 00:04:57.555 18:45:40 event -- scripts/common.sh@353 -- # local d=2 00:04:57.555 18:45:40 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.555 18:45:40 event -- scripts/common.sh@355 -- # echo 2 00:04:57.555 18:45:40 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.555 18:45:40 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.555 18:45:40 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.555 18:45:40 event -- scripts/common.sh@368 -- # return 0 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.555 --rc genhtml_branch_coverage=1 00:04:57.555 --rc genhtml_function_coverage=1 00:04:57.555 --rc genhtml_legend=1 00:04:57.555 --rc geninfo_all_blocks=1 00:04:57.555 --rc geninfo_unexecuted_blocks=1 00:04:57.555 00:04:57.555 ' 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.555 --rc genhtml_branch_coverage=1 00:04:57.555 --rc genhtml_function_coverage=1 00:04:57.555 --rc genhtml_legend=1 00:04:57.555 --rc geninfo_all_blocks=1 00:04:57.555 --rc geninfo_unexecuted_blocks=1 00:04:57.555 00:04:57.555 ' 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.555 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:57.555 --rc genhtml_branch_coverage=1 00:04:57.555 --rc genhtml_function_coverage=1 00:04:57.555 --rc genhtml_legend=1 00:04:57.555 --rc geninfo_all_blocks=1 00:04:57.555 --rc geninfo_unexecuted_blocks=1 00:04:57.555 00:04:57.555 ' 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.555 --rc genhtml_branch_coverage=1 00:04:57.555 --rc genhtml_function_coverage=1 00:04:57.555 --rc genhtml_legend=1 00:04:57.555 --rc geninfo_all_blocks=1 00:04:57.555 --rc geninfo_unexecuted_blocks=1 00:04:57.555 00:04:57.555 ' 00:04:57.555 18:45:40 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:57.555 18:45:40 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:57.555 18:45:40 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:57.555 18:45:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.555 18:45:40 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.555 ************************************ 00:04:57.555 START TEST event_perf 00:04:57.555 ************************************ 00:04:57.555 18:45:40 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:57.555 Running I/O for 1 seconds...[2024-11-16 18:45:40.935099] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
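The scripts/common.sh trace above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`) splits both version strings on `IFS=.-:` into arrays and compares them field by field. A hedged sketch of that comparison, reconstructed from the traced steps rather than copied from scripts/common.sh, assuming purely numeric fields:

```shell
#!/usr/bin/env bash
# Hedged reconstruction of the cmp_versions logic visible in the xtrace:
# split on '.', '-', ':'; compare numerically field by field; missing
# trailing fields count as 0.
cmp_versions() {
    local op=$2 v max d1 d2
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    for (( v = 0; v < max; v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        (( d1 > d2 )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    [[ $op == '=' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }
```

This is why `lt 1.15 2` returns 0 in the trace (1 < 2 in the first field), which selects the extended LCOV options for lcov >= some threshold; note a naive string compare would get `1.2` vs `1.10` wrong, while the field-wise numeric compare does not.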
00:04:57.555 [2024-11-16 18:45:40.935251] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58077 ] 00:04:57.815 [2024-11-16 18:45:41.111306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.815 [2024-11-16 18:45:41.221397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.815 [2024-11-16 18:45:41.221704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.815 [2024-11-16 18:45:41.221657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.815 Running I/O for 1 seconds...[2024-11-16 18:45:41.221566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.194 00:04:59.194 lcore 0: 210141 00:04:59.194 lcore 1: 210140 00:04:59.194 lcore 2: 210141 00:04:59.194 lcore 3: 210141 00:04:59.194 done. 
00:04:59.194 00:04:59.194 real 0m1.570s 00:04:59.194 user 0m4.327s 00:04:59.194 sys 0m0.121s 00:04:59.194 18:45:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:59.194 18:45:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:59.194 ************************************ 00:04:59.194 END TEST event_perf 00:04:59.194 ************************************ 00:04:59.194 18:45:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.194 18:45:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:59.194 18:45:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:59.194 18:45:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.194 ************************************ 00:04:59.194 START TEST event_reactor 00:04:59.194 ************************************ 00:04:59.194 18:45:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:59.194 [2024-11-16 18:45:42.583285] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:04:59.194 [2024-11-16 18:45:42.583497] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58116 ] 00:04:59.453 [2024-11-16 18:45:42.759028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.453 [2024-11-16 18:45:42.869781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.835 test_start 00:05:00.835 oneshot 00:05:00.835 tick 100 00:05:00.835 tick 100 00:05:00.835 tick 250 00:05:00.835 tick 100 00:05:00.835 tick 100 00:05:00.835 tick 100 00:05:00.835 tick 250 00:05:00.835 tick 500 00:05:00.835 tick 100 00:05:00.835 tick 100 00:05:00.835 tick 250 00:05:00.835 tick 100 00:05:00.835 tick 100 00:05:00.835 test_end 00:05:00.835 00:05:00.835 real 0m1.549s 00:05:00.835 user 0m1.349s 00:05:00.835 sys 0m0.093s 00:05:00.835 18:45:44 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.835 18:45:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:00.835 ************************************ 00:05:00.835 END TEST event_reactor 00:05:00.835 ************************************ 00:05:00.835 18:45:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.835 18:45:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:00.835 18:45:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.835 18:45:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.835 ************************************ 00:05:00.835 START TEST event_reactor_perf 00:05:00.835 ************************************ 00:05:00.835 18:45:44 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:00.835 [2024-11-16 
18:45:44.192275] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:00.835 [2024-11-16 18:45:44.192420] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58153 ] 00:05:01.094 [2024-11-16 18:45:44.364745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.094 [2024-11-16 18:45:44.470699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.475 test_start 00:05:02.475 test_end 00:05:02.475 Performance: 414734 events per second 00:05:02.475 00:05:02.475 real 0m1.548s 00:05:02.475 user 0m1.338s 00:05:02.475 sys 0m0.103s 00:05:02.475 18:45:45 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.475 18:45:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:02.475 ************************************ 00:05:02.475 END TEST event_reactor_perf 00:05:02.475 ************************************ 00:05:02.475 18:45:45 event -- event/event.sh@49 -- # uname -s 00:05:02.475 18:45:45 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:02.475 18:45:45 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.475 18:45:45 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.475 18:45:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.475 18:45:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.475 ************************************ 00:05:02.475 START TEST event_scheduler 00:05:02.475 ************************************ 00:05:02.475 18:45:45 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:02.475 * Looking for test storage... 
00:05:02.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:02.475 18:45:45 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:02.475 18:45:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:02.475 18:45:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.735 18:45:45 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:02.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.735 --rc genhtml_branch_coverage=1 00:05:02.735 --rc genhtml_function_coverage=1 00:05:02.735 --rc genhtml_legend=1 00:05:02.735 --rc geninfo_all_blocks=1 00:05:02.735 --rc geninfo_unexecuted_blocks=1 00:05:02.735 00:05:02.735 ' 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:02.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.735 --rc genhtml_branch_coverage=1 00:05:02.735 --rc genhtml_function_coverage=1 00:05:02.735 --rc 
genhtml_legend=1 00:05:02.735 --rc geninfo_all_blocks=1 00:05:02.735 --rc geninfo_unexecuted_blocks=1 00:05:02.735 00:05:02.735 ' 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:02.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.735 --rc genhtml_branch_coverage=1 00:05:02.735 --rc genhtml_function_coverage=1 00:05:02.735 --rc genhtml_legend=1 00:05:02.735 --rc geninfo_all_blocks=1 00:05:02.735 --rc geninfo_unexecuted_blocks=1 00:05:02.735 00:05:02.735 ' 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:02.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.735 --rc genhtml_branch_coverage=1 00:05:02.735 --rc genhtml_function_coverage=1 00:05:02.735 --rc genhtml_legend=1 00:05:02.735 --rc geninfo_all_blocks=1 00:05:02.735 --rc geninfo_unexecuted_blocks=1 00:05:02.735 00:05:02.735 ' 00:05:02.735 18:45:45 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:02.735 18:45:45 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58229 00:05:02.735 18:45:45 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:02.735 18:45:45 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.735 18:45:45 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58229 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58229 ']' 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:05:02.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.735 18:45:45 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.736 18:45:45 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:02.736 [2024-11-16 18:45:46.071351] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:02.736 [2024-11-16 18:45:46.071552] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58229 ] 00:05:02.996 [2024-11-16 18:45:46.247157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:02.996 [2024-11-16 18:45:46.365322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.996 [2024-11-16 18:45:46.365406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:02.996 [2024-11-16 18:45:46.365563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:02.996 [2024-11-16 18:45:46.365599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.566 18:45:46 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.566 18:45:46 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:03.566 18:45:46 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.566 18:45:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.566 18:45:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.566 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.566 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.566 POWER: Cannot set governor of lcore 0 to performance 00:05:03.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.566 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.566 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.566 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.566 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:03.566 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:03.566 POWER: Unable to set Power Management Environment for lcore 0 00:05:03.566 [2024-11-16 18:45:46.898492] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:03.566 [2024-11-16 18:45:46.898537] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:03.566 [2024-11-16 18:45:46.898573] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:03.566 [2024-11-16 18:45:46.898617] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:03.566 [2024-11-16 18:45:46.898644] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:03.566 [2024-11-16 18:45:46.898684] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:03.566 18:45:46 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.566 18:45:46 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.566 18:45:46 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.566 18:45:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.826 [2024-11-16 18:45:47.188212] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:03.826 18:45:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.826 18:45:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.826 18:45:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.826 18:45:47 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.826 18:45:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.826 ************************************ 00:05:03.826 START TEST scheduler_create_thread 00:05:03.826 ************************************ 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.826 2 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.826 3 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.826 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.826 4 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.827 5 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.827 6 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:03.827 7 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.827 8 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.827 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.086 9 00:05:04.086 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.086 18:45:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:04.086 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.086 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.086 10 00:05:04.086 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.086 18:45:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:04.086 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.086 18:45:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:05.465 18:45:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:05.465 18:45:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:05.465 18:45:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:05.465 18:45:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:05.465 18:45:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.033 18:45:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.033 18:45:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:06.033 18:45:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.033 18:45:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.969 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:06.969 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:06.969 18:45:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:06.969 18:45:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.969 18:45:50 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.964 ************************************ 00:05:07.964 END TEST scheduler_create_thread 00:05:07.964 ************************************ 00:05:07.964 18:45:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.964 00:05:07.964 real 0m3.884s 00:05:07.964 user 0m0.030s 00:05:07.964 sys 0m0.007s 00:05:07.964 18:45:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.964 18:45:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:07.964 18:45:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:07.964 18:45:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58229 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58229 ']' 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58229 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58229 00:05:07.964 killing process with pid 58229 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58229' 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58229 00:05:07.964 18:45:51 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58229 00:05:08.237 [2024-11-16 18:45:51.464500] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:09.175 ************************************ 00:05:09.175 END TEST event_scheduler 00:05:09.175 ************************************ 00:05:09.175 00:05:09.175 real 0m6.811s 00:05:09.175 user 0m14.090s 00:05:09.175 sys 0m0.495s 00:05:09.175 18:45:52 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.175 18:45:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:09.175 18:45:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:09.175 18:45:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:09.175 18:45:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.175 18:45:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.175 18:45:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 ************************************ 00:05:09.434 START TEST app_repeat 00:05:09.434 ************************************ 00:05:09.434 18:45:52 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58350 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:09.434 
18:45:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.434 Process app_repeat pid: 58350 00:05:09.434 spdk_app_start Round 0 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58350' 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:09.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.434 18:45:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58350 /var/tmp/spdk-nbd.sock 00:05:09.434 18:45:52 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58350 ']' 00:05:09.434 18:45:52 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.434 18:45:52 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.434 18:45:52 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.434 18:45:52 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.434 18:45:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.434 [2024-11-16 18:45:52.712964] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:09.434 [2024-11-16 18:45:52.713134] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58350 ] 00:05:09.434 [2024-11-16 18:45:52.888571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.693 [2024-11-16 18:45:52.994211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.693 [2024-11-16 18:45:52.994245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.263 18:45:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.263 18:45:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:10.263 18:45:53 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.522 Malloc0 00:05:10.522 18:45:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:10.782 Malloc1 00:05:10.782 18:45:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:10.782 18:45:54 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:10.782 18:45:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:11.042 /dev/nbd0 00:05:11.042 18:45:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:11.042 18:45:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.042 1+0 records in 00:05:11.042 1+0 
records out 00:05:11.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238302 s, 17.2 MB/s 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:11.042 18:45:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:11.042 18:45:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.042 18:45:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.042 18:45:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:11.302 /dev/nbd1 00:05:11.302 18:45:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:11.302 18:45:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:11.302 1+0 records in 00:05:11.302 1+0 records out 00:05:11.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202642 s, 20.2 MB/s 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:11.302 18:45:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:11.302 18:45:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:11.302 18:45:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:11.302 18:45:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:11.302 18:45:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.302 18:45:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:11.562 { 00:05:11.562 "nbd_device": "/dev/nbd0", 00:05:11.562 "bdev_name": "Malloc0" 00:05:11.562 }, 00:05:11.562 { 00:05:11.562 "nbd_device": "/dev/nbd1", 00:05:11.562 "bdev_name": "Malloc1" 00:05:11.562 } 00:05:11.562 ]' 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:11.562 { 00:05:11.562 "nbd_device": "/dev/nbd0", 00:05:11.562 "bdev_name": "Malloc0" 00:05:11.562 }, 00:05:11.562 { 00:05:11.562 "nbd_device": "/dev/nbd1", 00:05:11.562 "bdev_name": "Malloc1" 00:05:11.562 } 00:05:11.562 ]' 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:11.562 /dev/nbd1' 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:11.562 /dev/nbd1' 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:11.562 256+0 records in 00:05:11.562 256+0 records out 00:05:11.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136577 s, 76.8 MB/s 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:11.562 256+0 records in 00:05:11.562 256+0 records out 00:05:11.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02287 s, 45.8 MB/s 00:05:11.562 18:45:54 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:11.562 256+0 records in 00:05:11.562 256+0 records out 00:05:11.562 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236412 s, 44.4 MB/s 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.562 18:45:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:11.822 18:45:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.082 18:45:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:12.342 18:45:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:12.342 18:45:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:12.601 18:45:56 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:13.980 [2024-11-16 18:45:57.086954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.980 [2024-11-16 18:45:57.186791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.980 [2024-11-16 18:45:57.186795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.980 
[2024-11-16 18:45:57.367802] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:13.980 [2024-11-16 18:45:57.367972] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:15.889 spdk_app_start Round 1 00:05:15.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:15.889 18:45:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:15.889 18:45:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:15.889 18:45:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58350 /var/tmp/spdk-nbd.sock 00:05:15.889 18:45:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58350 ']' 00:05:15.889 18:45:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:15.889 18:45:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.889 18:45:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:15.889 18:45:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.889 18:45:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:15.889 18:45:59 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.889 18:45:59 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:15.889 18:45:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.148 Malloc0 00:05:16.148 18:45:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:16.407 Malloc1 00:05:16.407 18:45:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:16.407 18:45:59 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.407 18:45:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:16.666 /dev/nbd0 00:05:16.666 18:45:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:16.666 18:45:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:16.666 18:45:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:16.666 18:45:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.666 18:45:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.666 18:45:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.666 18:45:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:16.666 18:45:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.666 18:45:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.667 18:45:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.667 18:45:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.667 1+0 records in 00:05:16.667 1+0 records out 00:05:16.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559038 s, 7.3 MB/s 00:05:16.667 18:45:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.667 18:45:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.667 18:45:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.667 18:45:59 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.667 18:45:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.667 18:45:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.667 18:45:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.667 18:45:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:16.926 /dev/nbd1 00:05:16.926 18:46:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:16.926 18:46:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:16.926 1+0 records in 00:05:16.926 1+0 records out 00:05:16.926 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350746 s, 11.7 MB/s 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:16.926 18:46:00 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:16.926 18:46:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:16.926 18:46:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:16.926 18:46:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:16.926 18:46:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:16.926 18:46:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.926 18:46:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:17.188 { 00:05:17.188 "nbd_device": "/dev/nbd0", 00:05:17.188 "bdev_name": "Malloc0" 00:05:17.188 }, 00:05:17.188 { 00:05:17.188 "nbd_device": "/dev/nbd1", 00:05:17.188 "bdev_name": "Malloc1" 00:05:17.188 } 00:05:17.188 ]' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:17.188 { 00:05:17.188 "nbd_device": "/dev/nbd0", 00:05:17.188 "bdev_name": "Malloc0" 00:05:17.188 }, 00:05:17.188 { 00:05:17.188 "nbd_device": "/dev/nbd1", 00:05:17.188 "bdev_name": "Malloc1" 00:05:17.188 } 00:05:17.188 ]' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:17.188 /dev/nbd1' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:17.188 /dev/nbd1' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:17.188 
18:46:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:17.188 256+0 records in 00:05:17.188 256+0 records out 00:05:17.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145575 s, 72.0 MB/s 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:17.188 256+0 records in 00:05:17.188 256+0 records out 00:05:17.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0241493 s, 43.4 MB/s 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:17.188 256+0 records in 00:05:17.188 256+0 records out 00:05:17.188 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235352 s, 44.6 MB/s 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.188 18:46:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:17.463 18:46:00 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:17.463 18:46:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:17.463 18:46:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:17.463 18:46:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.463 18:46:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.463 18:46:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:17.463 18:46:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.463 18:46:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.463 18:46:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:17.463 18:46:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.736 18:46:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:17.997 18:46:01 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:17.997 18:46:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:17.997 18:46:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:18.256 18:46:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:19.634 [2024-11-16 18:46:02.724692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.634 [2024-11-16 18:46:02.823580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.634 [2024-11-16 18:46:02.823608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.634 [2024-11-16 18:46:03.004719] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:19.634 [2024-11-16 18:46:03.004873] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:21.541 spdk_app_start Round 2 00:05:21.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:21.541 18:46:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:21.541 18:46:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:21.541 18:46:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58350 /var/tmp/spdk-nbd.sock 00:05:21.541 18:46:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58350 ']' 00:05:21.541 18:46:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:21.541 18:46:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.541 18:46:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:21.541 18:46:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.541 18:46:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:21.541 18:46:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.541 18:46:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:21.541 18:46:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:21.800 Malloc0 00:05:21.800 18:46:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.061 Malloc1 00:05:22.061 18:46:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.061 18:46:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:22.321 /dev/nbd0 00:05:22.321 18:46:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:22.321 18:46:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:22.321 18:46:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:22.321 18:46:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.321 18:46:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.321 18:46:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.321 18:46:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:22.321 18:46:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:22.321 18:46:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:05:22.321 18:46:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.322 18:46:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.322 1+0 records in 00:05:22.322 1+0 records out 00:05:22.322 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393038 s, 10.4 MB/s 00:05:22.322 18:46:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.322 18:46:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.322 18:46:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.322 18:46:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.322 18:46:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.322 18:46:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.322 18:46:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.322 18:46:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:22.582 /dev/nbd1 00:05:22.582 18:46:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:22.582 18:46:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:22.582 18:46:05 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:22.582 1+0 records in 00:05:22.582 1+0 records out 00:05:22.582 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000289322 s, 14.2 MB/s 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:22.582 18:46:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:22.582 18:46:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:22.582 18:46:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.582 18:46:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:22.582 18:46:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.582 18:46:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:22.842 { 00:05:22.842 "nbd_device": "/dev/nbd0", 00:05:22.842 "bdev_name": "Malloc0" 00:05:22.842 }, 00:05:22.842 { 00:05:22.842 "nbd_device": "/dev/nbd1", 00:05:22.842 "bdev_name": "Malloc1" 00:05:22.842 } 00:05:22.842 ]' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:22.842 { 
00:05:22.842 "nbd_device": "/dev/nbd0", 00:05:22.842 "bdev_name": "Malloc0" 00:05:22.842 }, 00:05:22.842 { 00:05:22.842 "nbd_device": "/dev/nbd1", 00:05:22.842 "bdev_name": "Malloc1" 00:05:22.842 } 00:05:22.842 ]' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:22.842 /dev/nbd1' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:22.842 /dev/nbd1' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:22.842 256+0 records in 00:05:22.842 256+0 records out 00:05:22.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00926777 s, 113 MB/s 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.842 18:46:06 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:22.842 256+0 records in 00:05:22.842 256+0 records out 00:05:22.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233184 s, 45.0 MB/s 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:22.842 256+0 records in 00:05:22.842 256+0 records out 00:05:22.842 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233066 s, 45.0 MB/s 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
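The `nbd_dd_data_verify` sequence traced above is a plain write-then-compare pattern: fill a temp file with random data, `dd` it onto each NBD device, then `cmp` each device back against the source. A minimal self-contained sketch, using ordinary temp files in place of `/dev/nbd0` and `/dev/nbd1` (and illustrative file names, not the test's real paths) so it runs without NBD devices:

```shell
# Stand-ins: tmp_file plays the role of .../test/event/nbdrandtest,
# dev0/dev1 play the role of /dev/nbd0 and /dev/nbd1.
set -e
tmp_file=$(mktemp)
dev0=$(mktemp)
dev1=$(mktemp)

# Write phase: 256 blocks of 4 KiB random data, copied onto each "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify phase: byte-compare the first 1 MiB of each "device" to the source
# (cmp -b prints differing bytes; -n 1M limits the comparison length).
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
verify_rc=$?

rm -f "$tmp_file"   # the trace removes nbdrandtest the same way
```

In the real test the `oflag=direct`/`iflag=direct` flags matter because they bypass the page cache, forcing the I/O through the NBD kernel driver to the SPDK bdev; with plain files that flag is omitted here.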
00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:22.842 18:46:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.102 18:46:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:23.362 18:46:06 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.362 18:46:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:23.621 18:46:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:23.621 18:46:06 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:23.881 18:46:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:25.263 
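Both `waitfornbd` and `waitfornbd_exit` in the trace above are bounded polling loops: up to 20 attempts of `grep -q -w nbdX /proc/partitions`, breaking as soon as the device appears (or disappears). A hedged sketch of that polling shape, watching for a plain file instead of a `/proc/partitions` entry so it is self-contained (the helper name and retry bound mirror the trace; the target path is hypothetical):

```shell
# Poll for a path to exist, up to 20 attempts, like waitfornbd does for
# an nbd entry in /proc/partitions.
wait_for_path() {
    local path=$1 i
    for ((i = 1; i <= 20; i++)); do
        # Real helper: grep -q -w "$nbd_name" /proc/partitions && break
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}

target=$(mktemp -u)              # a path that does not exist yet
( sleep 0.3; touch "$target" ) & # something creates it shortly
if wait_for_path "$target"; then status=ok; else status=timeout; fi
wait                             # reap the background creator
```

The bounded retry (rather than an open-ended loop) is what lets the test fail fast with a clear error if the kernel never exposes the device.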
[2024-11-16 18:46:08.428368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.263 [2024-11-16 18:46:08.535362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.263 [2024-11-16 18:46:08.535365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.263 [2024-11-16 18:46:08.714258] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.263 [2024-11-16 18:46:08.714342] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.169 18:46:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58350 /var/tmp/spdk-nbd.sock 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58350 ']' 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:27.169 18:46:10 event.app_repeat -- event/event.sh@39 -- # killprocess 58350 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58350 ']' 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58350 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58350 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58350' 00:05:27.169 killing process with pid 58350 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58350 00:05:27.169 18:46:10 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58350 00:05:28.568 spdk_app_start is called in Round 0. 00:05:28.568 Shutdown signal received, stop current app iteration 00:05:28.568 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:28.568 spdk_app_start is called in Round 1. 00:05:28.568 Shutdown signal received, stop current app iteration 00:05:28.568 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:28.568 spdk_app_start is called in Round 2. 
00:05:28.568 Shutdown signal received, stop current app iteration 00:05:28.568 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:05:28.568 spdk_app_start is called in Round 3. 00:05:28.568 Shutdown signal received, stop current app iteration 00:05:28.568 18:46:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:28.568 18:46:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:28.568 00:05:28.568 real 0m18.969s 00:05:28.568 user 0m40.594s 00:05:28.568 sys 0m2.723s 00:05:28.568 18:46:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.568 18:46:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:28.568 ************************************ 00:05:28.568 END TEST app_repeat 00:05:28.568 ************************************ 00:05:28.568 18:46:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:28.568 18:46:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.568 18:46:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.568 18:46:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.568 18:46:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:28.568 ************************************ 00:05:28.568 START TEST cpu_locks 00:05:28.568 ************************************ 00:05:28.568 18:46:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:28.568 * Looking for test storage... 
00:05:28.568 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:28.568 18:46:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:28.568 18:46:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:28.568 18:46:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:28.568 18:46:11 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.568 18:46:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:28.568 18:46:11 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.568 18:46:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:28.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.568 --rc genhtml_branch_coverage=1 00:05:28.568 --rc genhtml_function_coverage=1 00:05:28.568 --rc genhtml_legend=1 00:05:28.568 --rc geninfo_all_blocks=1 00:05:28.568 --rc geninfo_unexecuted_blocks=1 00:05:28.568 00:05:28.568 ' 00:05:28.568 18:46:11 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:28.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.568 --rc genhtml_branch_coverage=1 00:05:28.568 --rc genhtml_function_coverage=1 00:05:28.568 --rc genhtml_legend=1 00:05:28.568 --rc geninfo_all_blocks=1 00:05:28.568 --rc geninfo_unexecuted_blocks=1 
00:05:28.569 00:05:28.569 ' 00:05:28.569 18:46:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:28.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.569 --rc genhtml_branch_coverage=1 00:05:28.569 --rc genhtml_function_coverage=1 00:05:28.569 --rc genhtml_legend=1 00:05:28.569 --rc geninfo_all_blocks=1 00:05:28.569 --rc geninfo_unexecuted_blocks=1 00:05:28.569 00:05:28.569 ' 00:05:28.569 18:46:11 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:28.569 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.569 --rc genhtml_branch_coverage=1 00:05:28.569 --rc genhtml_function_coverage=1 00:05:28.569 --rc genhtml_legend=1 00:05:28.569 --rc geninfo_all_blocks=1 00:05:28.569 --rc geninfo_unexecuted_blocks=1 00:05:28.569 00:05:28.569 ' 00:05:28.569 18:46:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:28.569 18:46:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:28.569 18:46:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:28.569 18:46:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:28.569 18:46:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.569 18:46:11 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.569 18:46:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.569 ************************************ 00:05:28.569 START TEST default_locks 00:05:28.569 ************************************ 00:05:28.569 18:46:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:28.569 18:46:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58793 00:05:28.569 18:46:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:28.569 
18:46:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58793 00:05:28.569 18:46:11 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58793 ']' 00:05:28.569 18:46:11 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.569 18:46:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.569 18:46:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.569 18:46:11 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.569 18:46:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.569 [2024-11-16 18:46:12.024191] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:28.569 [2024-11-16 18:46:12.024406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58793 ] 00:05:28.829 [2024-11-16 18:46:12.188668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.829 [2024-11-16 18:46:12.300461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.768 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.768 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:29.769 18:46:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58793 00:05:29.769 18:46:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:29.769 18:46:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58793 00:05:30.028 18:46:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58793 00:05:30.028 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58793 ']' 00:05:30.028 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58793 00:05:30.288 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:30.288 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.288 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58793 00:05:30.288 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.288 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.288 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 58793' 00:05:30.288 killing process with pid 58793 00:05:30.288 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58793 00:05:30.288 18:46:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58793 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58793 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58793 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:32.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58793 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58793 ']' 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.826 ERROR: process (pid: 58793) is no longer running 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.826 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58793) - No such process 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:32.826 18:46:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:32.827 18:46:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:32.827 18:46:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:32.827 18:46:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:32.827 00:05:32.827 real 0m3.837s 00:05:32.827 user 0m3.785s 00:05:32.827 sys 0m0.620s 00:05:32.827 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.827 18:46:15 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.827 ************************************ 00:05:32.827 END TEST default_locks 00:05:32.827 ************************************ 00:05:32.827 18:46:15 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:32.827 18:46:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:05:32.827 18:46:15 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.827 18:46:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.827 ************************************ 00:05:32.827 START TEST default_locks_via_rpc 00:05:32.827 ************************************ 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58863 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58863 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58863 ']' 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.827 18:46:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.827 [2024-11-16 18:46:15.930859] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:32.827 [2024-11-16 18:46:15.930959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58863 ] 00:05:32.827 [2024-11-16 18:46:16.104360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.827 [2024-11-16 18:46:16.215422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.766 18:46:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58863 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58863 00:05:33.766 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:34.025 18:46:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58863 00:05:34.025 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58863 ']' 00:05:34.025 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58863 00:05:34.026 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:34.026 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.026 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58863 00:05:34.026 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.026 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.026 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58863' 00:05:34.026 killing process with pid 58863 00:05:34.026 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58863 00:05:34.026 18:46:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58863 00:05:36.568 00:05:36.568 real 0m3.698s 00:05:36.568 user 0m3.618s 00:05:36.568 sys 0m0.572s 00:05:36.568 ************************************ 00:05:36.568 END TEST default_locks_via_rpc 00:05:36.568 ************************************ 00:05:36.568 
18:46:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.568 18:46:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.568 18:46:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:36.568 18:46:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.568 18:46:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.568 18:46:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.568 ************************************ 00:05:36.568 START TEST non_locking_app_on_locked_coremask 00:05:36.568 ************************************ 00:05:36.568 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:36.568 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58931 00:05:36.568 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.568 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58931 /var/tmp/spdk.sock 00:05:36.568 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58931 ']' 00:05:36.568 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.568 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.569 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:36.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.569 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.569 18:46:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.569 [2024-11-16 18:46:19.695242] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:36.569 [2024-11-16 18:46:19.695411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58931 ] 00:05:36.569 [2024-11-16 18:46:19.867968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.569 [2024-11-16 18:46:19.972568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58953 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58953 /var/tmp/spdk2.sock 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58953 ']' 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.506 18:46:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.506 [2024-11-16 18:46:20.853417] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:37.506 [2024-11-16 18:46:20.853625] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:05:37.766 [2024-11-16 18:46:21.022460] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:37.766 [2024-11-16 18:46:21.022535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.025 [2024-11-16 18:46:21.253301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.957 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.957 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:39.957 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58931 00:05:39.957 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58931 00:05:39.957 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:40.217 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58931 00:05:40.217 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58931 ']' 00:05:40.217 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58931 00:05:40.217 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:40.217 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.217 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
58931 00:05:40.477 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.477 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.477 killing process with pid 58931 00:05:40.477 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58931' 00:05:40.477 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58931 00:05:40.477 18:46:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58931 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58953 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58953 ']' 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58953 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58953 00:05:45.759 killing process with pid 58953 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58953' 00:05:45.759 18:46:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58953 00:05:45.759 18:46:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58953 00:05:47.139 00:05:47.139 real 0m10.835s 00:05:47.139 user 0m11.015s 00:05:47.139 sys 0m1.148s 00:05:47.139 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.139 18:46:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.139 ************************************ 00:05:47.139 END TEST non_locking_app_on_locked_coremask 00:05:47.139 ************************************ 00:05:47.139 18:46:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:47.139 18:46:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.139 18:46:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.139 18:46:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.139 ************************************ 00:05:47.139 START TEST locking_app_on_unlocked_coremask 00:05:47.139 ************************************ 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59092 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59092 /var/tmp/spdk.sock 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59092 ']' 
00:05:47.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.139 18:46:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.399 [2024-11-16 18:46:30.613192] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:47.399 [2024-11-16 18:46:30.613339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59092 ] 00:05:47.399 [2024-11-16 18:46:30.791036] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:47.399 [2024-11-16 18:46:30.791151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.659 [2024-11-16 18:46:30.906165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59108 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59108 /var/tmp/spdk2.sock 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59108 ']' 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.598 18:46:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:48.598 [2024-11-16 18:46:31.820622] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:48.598 [2024-11-16 18:46:31.820794] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59108 ] 00:05:48.598 [2024-11-16 18:46:31.984951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.857 [2024-11-16 18:46:32.200522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59108 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59108 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59092 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59092 ']' 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59092 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59092 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:05:51.394 killing process with pid 59092 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59092' 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59092 00:05:51.394 18:46:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59092 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59108 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59108 ']' 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59108 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59108 00:05:56.695 killing process with pid 59108 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59108' 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59108 00:05:56.695 18:46:39 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 59108 00:05:58.078 ************************************ 00:05:58.078 END TEST locking_app_on_unlocked_coremask 00:05:58.078 ************************************ 00:05:58.078 00:05:58.078 real 0m11.048s 00:05:58.078 user 0m11.265s 00:05:58.078 sys 0m1.135s 00:05:58.078 18:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.078 18:46:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.339 18:46:41 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:58.339 18:46:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.339 18:46:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.339 18:46:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.339 ************************************ 00:05:58.339 START TEST locking_app_on_locked_coremask 00:05:58.339 ************************************ 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59257 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59257 /var/tmp/spdk.sock 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59257 ']' 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.339 18:46:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.339 [2024-11-16 18:46:41.712132] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:58.339 [2024-11-16 18:46:41.712342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59257 ] 00:05:58.598 [2024-11-16 18:46:41.887689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.598 [2024-11-16 18:46:42.001735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.537 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.537 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:59.537 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59275 00:05:59.537 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59275 /var/tmp/spdk2.sock 00:05:59.537 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:59.537 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:05:59.537 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59275 /var/tmp/spdk2.sock 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:59.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59275 /var/tmp/spdk2.sock 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59275 ']' 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.538 18:46:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.538 [2024-11-16 18:46:42.894186] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:59.538 [2024-11-16 18:46:42.894406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59275 ] 00:05:59.797 [2024-11-16 18:46:43.062051] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59257 has claimed it. 00:05:59.797 [2024-11-16 18:46:43.062123] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:00.057 ERROR: process (pid: 59275) is no longer running 00:06:00.057 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59275) - No such process 00:06:00.057 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.057 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:00.057 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:00.057 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:00.057 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:00.057 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:00.057 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59257 00:06:00.057 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59257 00:06:00.057 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59257 00:06:00.627 18:46:43 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59257 ']' 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59257 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59257 00:06:00.627 killing process with pid 59257 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59257' 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59257 00:06:00.627 18:46:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59257 00:06:03.167 00:06:03.167 real 0m4.551s 00:06:03.167 user 0m4.679s 00:06:03.167 sys 0m0.772s 00:06:03.167 18:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.167 18:46:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.167 ************************************ 00:06:03.167 END TEST locking_app_on_locked_coremask 00:06:03.167 ************************************ 00:06:03.167 18:46:46 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:03.167 18:46:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:03.167 18:46:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.167 18:46:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.167 ************************************ 00:06:03.167 START TEST locking_overlapped_coremask 00:06:03.167 ************************************ 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59339 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59339 /var/tmp/spdk.sock 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59339 ']' 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.167 18:46:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.167 [2024-11-16 18:46:46.332109] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:03.167 [2024-11-16 18:46:46.332229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59339 ] 00:06:03.167 [2024-11-16 18:46:46.508094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:03.167 [2024-11-16 18:46:46.623071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.167 [2024-11-16 18:46:46.623196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.167 [2024-11-16 18:46:46.623237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.112 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59359 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59359 /var/tmp/spdk2.sock 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59359 /var/tmp/spdk2.sock 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59359 /var/tmp/spdk2.sock 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59359 ']' 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.113 18:46:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.113 [2024-11-16 18:46:47.535617] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:04.113 [2024-11-16 18:46:47.535850] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59359 ] 00:06:04.373 [2024-11-16 18:46:47.704534] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59339 has claimed it. 00:06:04.373 [2024-11-16 18:46:47.704600] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:04.944 ERROR: process (pid: 59359) is no longer running 00:06:04.944 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59359) - No such process 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59339 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59339 ']' 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59339 00:06:04.944 18:46:48 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59339 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59339' 00:06:04.944 killing process with pid 59339 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59339 00:06:04.944 18:46:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59339 00:06:07.490 00:06:07.490 real 0m4.328s 00:06:07.490 user 0m11.731s 00:06:07.490 sys 0m0.569s 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.490 ************************************ 00:06:07.490 END TEST locking_overlapped_coremask 00:06:07.490 ************************************ 00:06:07.490 18:46:50 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.490 18:46:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.490 18:46:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.490 18:46:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.490 ************************************ 00:06:07.490 START TEST 
locking_overlapped_coremask_via_rpc 00:06:07.490 ************************************ 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59429 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59429 /var/tmp/spdk.sock 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59429 ']' 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.490 18:46:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.490 [2024-11-16 18:46:50.722980] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:07.490 [2024-11-16 18:46:50.723198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59429 ] 00:06:07.490 [2024-11-16 18:46:50.884780] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.490 [2024-11-16 18:46:50.884825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.750 [2024-11-16 18:46:51.002111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.750 [2024-11-16 18:46:51.002246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.750 [2024-11-16 18:46:51.002281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59447 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59447 /var/tmp/spdk2.sock 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59447 ']' 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.689 18:46:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:08.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.689 18:46:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.689 [2024-11-16 18:46:51.921447] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:08.689 [2024-11-16 18:46:51.921640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59447 ] 00:06:08.689 [2024-11-16 18:46:52.089942] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.689 [2024-11-16 18:46:52.089992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.949 [2024-11-16 18:46:52.325613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.949 [2024-11-16 18:46:52.328847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.949 [2024-11-16 18:46:52.328883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.502 18:46:54 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.502 [2024-11-16 18:46:54.460870] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59429 has claimed it. 00:06:11.502 request: 00:06:11.502 { 00:06:11.502 "method": "framework_enable_cpumask_locks", 00:06:11.502 "req_id": 1 00:06:11.502 } 00:06:11.502 Got JSON-RPC error response 00:06:11.502 response: 00:06:11.502 { 00:06:11.502 "code": -32603, 00:06:11.502 "message": "Failed to claim CPU core: 2" 00:06:11.502 } 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59429 /var/tmp/spdk.sock 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59429 ']' 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.502 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59447 /var/tmp/spdk2.sock 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59447 ']' 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.503 00:06:11.503 real 0m4.302s 00:06:11.503 user 0m1.239s 00:06:11.503 sys 0m0.206s 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.503 18:46:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.503 ************************************ 00:06:11.503 END TEST locking_overlapped_coremask_via_rpc 00:06:11.503 ************************************ 00:06:11.763 18:46:54 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:11.763 18:46:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59429 ]] 00:06:11.763 18:46:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59429 00:06:11.763 18:46:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59429 ']' 00:06:11.763 18:46:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59429 00:06:11.763 18:46:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:11.763 18:46:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.763 18:46:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59429 00:06:11.763 killing process with pid 59429 00:06:11.763 18:46:55 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.763 18:46:55 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.763 18:46:55 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59429' 00:06:11.763 18:46:55 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59429 00:06:11.763 18:46:55 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59429 00:06:14.306 18:46:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59447 ]] 00:06:14.306 18:46:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59447 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59447 ']' 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59447 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59447 00:06:14.306 killing process with pid 59447 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59447' 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59447 00:06:14.306 18:46:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59447 00:06:16.847 18:46:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.847 18:46:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:16.847 18:46:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59429 ]] 00:06:16.847 Process with pid 59429 is not found 00:06:16.847 18:46:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59429 00:06:16.847 18:46:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59429 ']' 00:06:16.847 18:46:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59429 00:06:16.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59429) - No such process 00:06:16.847 18:46:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59429 is not found' 00:06:16.847 18:46:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59447 ]] 00:06:16.847 18:46:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59447 00:06:16.847 18:46:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59447 ']' 00:06:16.847 18:46:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59447 00:06:16.847 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59447) - No such process 00:06:16.847 Process with pid 59447 is not found 00:06:16.847 18:46:59 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59447 is not found' 00:06:16.847 18:46:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:16.847 00:06:16.847 real 0m48.147s 00:06:16.847 user 1m23.055s 00:06:16.847 sys 0m6.225s 00:06:16.847 18:46:59 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.847 18:46:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.847 
************************************ 00:06:16.847 END TEST cpu_locks 00:06:16.847 ************************************ 00:06:16.847 ************************************ 00:06:16.847 END TEST event 00:06:16.847 ************************************ 00:06:16.847 00:06:16.847 real 1m19.236s 00:06:16.847 user 2m24.985s 00:06:16.847 sys 0m10.170s 00:06:16.847 18:46:59 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.847 18:46:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.847 18:46:59 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.847 18:46:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.847 18:46:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.847 18:46:59 -- common/autotest_common.sh@10 -- # set +x 00:06:16.847 ************************************ 00:06:16.847 START TEST thread 00:06:16.847 ************************************ 00:06:16.847 18:46:59 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:16.847 * Looking for test storage... 
00:06:16.847 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.847 18:47:00 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.847 18:47:00 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.847 18:47:00 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.847 18:47:00 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.847 18:47:00 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.847 18:47:00 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.847 18:47:00 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.847 18:47:00 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.847 18:47:00 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.847 18:47:00 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.847 18:47:00 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.847 18:47:00 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:16.847 18:47:00 thread -- scripts/common.sh@345 -- # : 1 00:06:16.847 18:47:00 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.847 18:47:00 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.847 18:47:00 thread -- scripts/common.sh@365 -- # decimal 1 00:06:16.847 18:47:00 thread -- scripts/common.sh@353 -- # local d=1 00:06:16.847 18:47:00 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.847 18:47:00 thread -- scripts/common.sh@355 -- # echo 1 00:06:16.847 18:47:00 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.847 18:47:00 thread -- scripts/common.sh@366 -- # decimal 2 00:06:16.847 18:47:00 thread -- scripts/common.sh@353 -- # local d=2 00:06:16.847 18:47:00 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.847 18:47:00 thread -- scripts/common.sh@355 -- # echo 2 00:06:16.847 18:47:00 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.847 18:47:00 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.847 18:47:00 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.847 18:47:00 thread -- scripts/common.sh@368 -- # return 0 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.847 --rc genhtml_branch_coverage=1 00:06:16.847 --rc genhtml_function_coverage=1 00:06:16.847 --rc genhtml_legend=1 00:06:16.847 --rc geninfo_all_blocks=1 00:06:16.847 --rc geninfo_unexecuted_blocks=1 00:06:16.847 00:06:16.847 ' 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.847 --rc genhtml_branch_coverage=1 00:06:16.847 --rc genhtml_function_coverage=1 00:06:16.847 --rc genhtml_legend=1 00:06:16.847 --rc geninfo_all_blocks=1 00:06:16.847 --rc geninfo_unexecuted_blocks=1 00:06:16.847 00:06:16.847 ' 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.847 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.847 --rc genhtml_branch_coverage=1 00:06:16.847 --rc genhtml_function_coverage=1 00:06:16.847 --rc genhtml_legend=1 00:06:16.847 --rc geninfo_all_blocks=1 00:06:16.847 --rc geninfo_unexecuted_blocks=1 00:06:16.847 00:06:16.847 ' 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.847 --rc genhtml_branch_coverage=1 00:06:16.847 --rc genhtml_function_coverage=1 00:06:16.847 --rc genhtml_legend=1 00:06:16.847 --rc geninfo_all_blocks=1 00:06:16.847 --rc geninfo_unexecuted_blocks=1 00:06:16.847 00:06:16.847 ' 00:06:16.847 18:47:00 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.847 18:47:00 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.847 ************************************ 00:06:16.847 START TEST thread_poller_perf 00:06:16.847 ************************************ 00:06:16.847 18:47:00 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:16.847 [2024-11-16 18:47:00.240290] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:16.847 [2024-11-16 18:47:00.240483] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59642 ] 00:06:17.108 [2024-11-16 18:47:00.412427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.108 [2024-11-16 18:47:00.523300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.108 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:18.492 [2024-11-16T18:47:01.964Z] ====================================== 00:06:18.492 [2024-11-16T18:47:01.964Z] busy:2300019446 (cyc) 00:06:18.492 [2024-11-16T18:47:01.964Z] total_run_count: 415000 00:06:18.492 [2024-11-16T18:47:01.964Z] tsc_hz: 2290000000 (cyc) 00:06:18.492 [2024-11-16T18:47:01.964Z] ====================================== 00:06:18.492 [2024-11-16T18:47:01.964Z] poller_cost: 5542 (cyc), 2420 (nsec) 00:06:18.492 00:06:18.492 real 0m1.557s 00:06:18.492 user 0m1.354s 00:06:18.492 sys 0m0.097s 00:06:18.492 18:47:01 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.492 18:47:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:18.492 ************************************ 00:06:18.492 END TEST thread_poller_perf 00:06:18.492 ************************************ 00:06:18.492 18:47:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.492 18:47:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:18.492 18:47:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.492 18:47:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.492 ************************************ 00:06:18.492 START TEST thread_poller_perf 00:06:18.492 
************************************ 00:06:18.492 18:47:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:18.492 [2024-11-16 18:47:01.868111] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:18.492 [2024-11-16 18:47:01.868208] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59678 ] 00:06:18.751 [2024-11-16 18:47:02.041266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.751 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:18.751 [2024-11-16 18:47:02.147993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.131 [2024-11-16T18:47:03.603Z] ====================================== 00:06:20.131 [2024-11-16T18:47:03.603Z] busy:2293133560 (cyc) 00:06:20.131 [2024-11-16T18:47:03.603Z] total_run_count: 5268000 00:06:20.131 [2024-11-16T18:47:03.603Z] tsc_hz: 2290000000 (cyc) 00:06:20.131 [2024-11-16T18:47:03.603Z] ====================================== 00:06:20.131 [2024-11-16T18:47:03.603Z] poller_cost: 435 (cyc), 189 (nsec) 00:06:20.131 00:06:20.131 real 0m1.544s 00:06:20.131 user 0m1.347s 00:06:20.131 sys 0m0.091s 00:06:20.131 ************************************ 00:06:20.131 END TEST thread_poller_perf 00:06:20.131 ************************************ 00:06:20.131 18:47:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.131 18:47:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.131 18:47:03 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:20.131 ************************************ 00:06:20.131 END TEST thread 00:06:20.131 ************************************ 00:06:20.131 
00:06:20.131 real 0m3.468s 00:06:20.131 user 0m2.869s 00:06:20.131 sys 0m0.395s 00:06:20.131 18:47:03 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.131 18:47:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:20.131 18:47:03 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:20.131 18:47:03 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:20.131 18:47:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.131 18:47:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.131 18:47:03 -- common/autotest_common.sh@10 -- # set +x 00:06:20.131 ************************************ 00:06:20.131 START TEST app_cmdline 00:06:20.131 ************************************ 00:06:20.131 18:47:03 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:20.392 * Looking for test storage... 00:06:20.392 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:20.392 18:47:03 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:20.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.392 --rc genhtml_branch_coverage=1 00:06:20.392 --rc genhtml_function_coverage=1 00:06:20.392 --rc 
genhtml_legend=1 00:06:20.392 --rc geninfo_all_blocks=1 00:06:20.392 --rc geninfo_unexecuted_blocks=1 00:06:20.392 00:06:20.392 ' 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:20.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.392 --rc genhtml_branch_coverage=1 00:06:20.392 --rc genhtml_function_coverage=1 00:06:20.392 --rc genhtml_legend=1 00:06:20.392 --rc geninfo_all_blocks=1 00:06:20.392 --rc geninfo_unexecuted_blocks=1 00:06:20.392 00:06:20.392 ' 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:20.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.392 --rc genhtml_branch_coverage=1 00:06:20.392 --rc genhtml_function_coverage=1 00:06:20.392 --rc genhtml_legend=1 00:06:20.392 --rc geninfo_all_blocks=1 00:06:20.392 --rc geninfo_unexecuted_blocks=1 00:06:20.392 00:06:20.392 ' 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:20.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:20.392 --rc genhtml_branch_coverage=1 00:06:20.392 --rc genhtml_function_coverage=1 00:06:20.392 --rc genhtml_legend=1 00:06:20.392 --rc geninfo_all_blocks=1 00:06:20.392 --rc geninfo_unexecuted_blocks=1 00:06:20.392 00:06:20.392 ' 00:06:20.392 18:47:03 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:20.392 18:47:03 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59766 00:06:20.392 18:47:03 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:20.392 18:47:03 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59766 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59766 ']' 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.392 18:47:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.392 [2024-11-16 18:47:03.811183] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:20.392 [2024-11-16 18:47:03.811292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59766 ] 00:06:20.652 [2024-11-16 18:47:03.984633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.652 [2024-11-16 18:47:04.088077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.591 18:47:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.591 18:47:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:21.591 18:47:04 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:21.591 { 00:06:21.591 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:06:21.591 "fields": { 00:06:21.591 "major": 25, 00:06:21.591 "minor": 1, 00:06:21.591 "patch": 0, 00:06:21.591 "suffix": "-pre", 00:06:21.591 "commit": "83e8405e4" 00:06:21.591 } 00:06:21.592 } 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:21.854 request: 00:06:21.854 { 00:06:21.854 "method": "env_dpdk_get_mem_stats", 00:06:21.854 "req_id": 1 00:06:21.854 } 00:06:21.854 Got JSON-RPC error response 00:06:21.854 response: 00:06:21.854 { 00:06:21.854 "code": -32601, 00:06:21.854 "message": "Method not found" 00:06:21.854 } 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:21.854 18:47:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59766 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59766 ']' 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59766 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:21.854 18:47:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.114 18:47:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59766 00:06:22.114 18:47:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.114 18:47:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.114 18:47:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59766' 00:06:22.114 killing process with pid 59766 00:06:22.114 18:47:05 app_cmdline -- common/autotest_common.sh@973 -- # kill 59766 00:06:22.114 18:47:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 59766 00:06:24.653 00:06:24.653 real 0m4.120s 00:06:24.653 user 0m4.286s 00:06:24.653 sys 0m0.595s 00:06:24.653 
************************************ 00:06:24.653 END TEST app_cmdline 00:06:24.653 ************************************ 00:06:24.653 18:47:07 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.653 18:47:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.653 18:47:07 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.653 18:47:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.653 18:47:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.653 18:47:07 -- common/autotest_common.sh@10 -- # set +x 00:06:24.653 ************************************ 00:06:24.653 START TEST version 00:06:24.653 ************************************ 00:06:24.653 18:47:07 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.653 * Looking for test storage... 00:06:24.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:24.653 18:47:07 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.653 18:47:07 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.653 18:47:07 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.653 18:47:07 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.653 18:47:07 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.653 18:47:07 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.653 18:47:07 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.653 18:47:07 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.653 18:47:07 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.653 18:47:07 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.653 18:47:07 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.653 18:47:07 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.653 18:47:07 version -- scripts/common.sh@340 -- # ver1_l=2 
00:06:24.653 18:47:07 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.653 18:47:07 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.653 18:47:07 version -- scripts/common.sh@344 -- # case "$op" in 00:06:24.653 18:47:07 version -- scripts/common.sh@345 -- # : 1 00:06:24.653 18:47:07 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.653 18:47:07 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.653 18:47:07 version -- scripts/common.sh@365 -- # decimal 1 00:06:24.653 18:47:07 version -- scripts/common.sh@353 -- # local d=1 00:06:24.653 18:47:07 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.653 18:47:07 version -- scripts/common.sh@355 -- # echo 1 00:06:24.653 18:47:07 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.653 18:47:07 version -- scripts/common.sh@366 -- # decimal 2 00:06:24.653 18:47:07 version -- scripts/common.sh@353 -- # local d=2 00:06:24.653 18:47:07 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.653 18:47:07 version -- scripts/common.sh@355 -- # echo 2 00:06:24.653 18:47:07 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.653 18:47:07 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.653 18:47:07 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.653 18:47:07 version -- scripts/common.sh@368 -- # return 0 00:06:24.653 18:47:07 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.653 18:47:07 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.653 --rc genhtml_branch_coverage=1 00:06:24.653 --rc genhtml_function_coverage=1 00:06:24.653 --rc genhtml_legend=1 00:06:24.653 --rc geninfo_all_blocks=1 00:06:24.653 --rc geninfo_unexecuted_blocks=1 00:06:24.653 00:06:24.653 ' 00:06:24.653 18:47:07 version -- 
common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:24.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.653 --rc genhtml_branch_coverage=1 00:06:24.653 --rc genhtml_function_coverage=1 00:06:24.653 --rc genhtml_legend=1 00:06:24.653 --rc geninfo_all_blocks=1 00:06:24.653 --rc geninfo_unexecuted_blocks=1 00:06:24.653 00:06:24.653 ' 00:06:24.653 18:47:07 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.653 --rc genhtml_branch_coverage=1 00:06:24.653 --rc genhtml_function_coverage=1 00:06:24.653 --rc genhtml_legend=1 00:06:24.653 --rc geninfo_all_blocks=1 00:06:24.653 --rc geninfo_unexecuted_blocks=1 00:06:24.653 00:06:24.653 ' 00:06:24.653 18:47:07 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.653 --rc genhtml_branch_coverage=1 00:06:24.653 --rc genhtml_function_coverage=1 00:06:24.653 --rc genhtml_legend=1 00:06:24.653 --rc geninfo_all_blocks=1 00:06:24.653 --rc geninfo_unexecuted_blocks=1 00:06:24.653 00:06:24.653 ' 00:06:24.653 18:47:07 version -- app/version.sh@17 -- # get_header_version major 00:06:24.653 18:47:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.653 18:47:07 version -- app/version.sh@14 -- # cut -f2 00:06:24.653 18:47:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.653 18:47:07 version -- app/version.sh@17 -- # major=25 00:06:24.653 18:47:07 version -- app/version.sh@18 -- # get_header_version minor 00:06:24.653 18:47:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.653 18:47:07 version -- app/version.sh@14 -- # cut -f2 00:06:24.653 18:47:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.653 18:47:07 version -- app/version.sh@18 -- 
# minor=1 00:06:24.653 18:47:07 version -- app/version.sh@19 -- # get_header_version patch 00:06:24.653 18:47:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.653 18:47:07 version -- app/version.sh@14 -- # cut -f2 00:06:24.653 18:47:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.653 18:47:07 version -- app/version.sh@19 -- # patch=0 00:06:24.654 18:47:07 version -- app/version.sh@20 -- # get_header_version suffix 00:06:24.654 18:47:07 version -- app/version.sh@14 -- # cut -f2 00:06:24.654 18:47:07 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.654 18:47:07 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.654 18:47:07 version -- app/version.sh@20 -- # suffix=-pre 00:06:24.654 18:47:07 version -- app/version.sh@22 -- # version=25.1 00:06:24.654 18:47:07 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:24.654 18:47:07 version -- app/version.sh@28 -- # version=25.1rc0 00:06:24.654 18:47:07 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:24.654 18:47:07 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:24.654 18:47:07 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:24.654 18:47:07 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:24.654 ************************************ 00:06:24.654 END TEST version 00:06:24.654 ************************************ 00:06:24.654 00:06:24.654 real 0m0.308s 00:06:24.654 user 0m0.193s 00:06:24.654 sys 0m0.167s 00:06:24.654 18:47:07 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.654 18:47:07 version -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.654 18:47:08 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:24.654 18:47:08 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:06:24.654 18:47:08 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:24.654 18:47:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.654 18:47:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.654 18:47:08 -- common/autotest_common.sh@10 -- # set +x 00:06:24.654 ************************************ 00:06:24.654 START TEST bdev_raid 00:06:24.654 ************************************ 00:06:24.654 18:47:08 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:24.914 * Looking for test storage... 00:06:24.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.914 
18:47:08 bdev_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@345 -- # : 1 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.914 18:47:08 bdev_raid -- scripts/common.sh@368 -- # return 0 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.914 --rc genhtml_branch_coverage=1 00:06:24.914 --rc genhtml_function_coverage=1 00:06:24.914 --rc genhtml_legend=1 00:06:24.914 --rc geninfo_all_blocks=1 00:06:24.914 --rc geninfo_unexecuted_blocks=1 00:06:24.914 00:06:24.914 ' 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:06:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.914 --rc genhtml_branch_coverage=1 00:06:24.914 --rc genhtml_function_coverage=1 00:06:24.914 --rc genhtml_legend=1 00:06:24.914 --rc geninfo_all_blocks=1 00:06:24.914 --rc geninfo_unexecuted_blocks=1 00:06:24.914 00:06:24.914 ' 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.914 --rc genhtml_branch_coverage=1 00:06:24.914 --rc genhtml_function_coverage=1 00:06:24.914 --rc genhtml_legend=1 00:06:24.914 --rc geninfo_all_blocks=1 00:06:24.914 --rc geninfo_unexecuted_blocks=1 00:06:24.914 00:06:24.914 ' 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:24.914 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.914 --rc genhtml_branch_coverage=1 00:06:24.914 --rc genhtml_function_coverage=1 00:06:24.914 --rc genhtml_legend=1 00:06:24.914 --rc geninfo_all_blocks=1 00:06:24.914 --rc geninfo_unexecuted_blocks=1 00:06:24.914 00:06:24.914 ' 00:06:24.914 18:47:08 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:24.914 18:47:08 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:24.914 18:47:08 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:06:24.914 18:47:08 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:06:24.914 18:47:08 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:06:24.914 18:47:08 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:06:24.914 18:47:08 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.914 18:47:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:06:24.914 ************************************ 00:06:24.914 START TEST raid1_resize_data_offset_test 00:06:24.914 ************************************ 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59955 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59955' 00:06:24.914 Process raid pid: 59955 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59955 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59955 ']' 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.914 18:47:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:24.914 [2024-11-16 18:47:08.380491] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:24.915 [2024-11-16 18:47:08.380704] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:25.174 [2024-11-16 18:47:08.555744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.433 [2024-11-16 18:47:08.664262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.433 [2024-11-16 18:47:08.860217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:25.433 [2024-11-16 18:47:08.860342] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.019 malloc0 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.019 malloc1 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.019 18:47:09 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.019 null0 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.019 [2024-11-16 18:47:09.368903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:06:26.019 [2024-11-16 18:47:09.370660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:06:26.019 [2024-11-16 18:47:09.370790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:06:26.019 [2024-11-16 18:47:09.370946] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:26.019 [2024-11-16 18:47:09.370962] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:06:26.019 [2024-11-16 18:47:09.371210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:26.019 [2024-11-16 18:47:09.371369] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:26.019 [2024-11-16 18:47:09.371381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:26.019 [2024-11-16 18:47:09.371530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.019 [2024-11-16 18:47:09.428794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.019 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.589 malloc2
00:06:26.589 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.589 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:06:26.589 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.589 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.589 [2024-11-16 18:47:09.965270] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:06:26.589 [2024-11-16 18:47:09.981197] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:26.589 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
[2024-11-16 18:47:09.982998] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:06:26.589 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:06:26.589 18:47:09 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:06:26.589 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.589 18:47:09 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59955
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59955 ']'
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59955
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59955
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:26.589 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:26.850 killing process with pid 59955
00:06:26.850 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59955'
00:06:26.850 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59955
00:06:26.850 18:47:10 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59955
00:06:26.850 [2024-11-16 18:47:10.061855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:06:26.850 [2024-11-16 18:47:10.062132] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:06:26.850 [2024-11-16 18:47:10.062190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:26.850 [2024-11-16 18:47:10.062207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:06:26.850 [2024-11-16 18:47:10.097642] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:26.850 [2024-11-16 18:47:10.097978] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:26.850 [2024-11-16 18:47:10.097995] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:28.759 [2024-11-16 18:47:11.781175] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:29.697 18:47:12 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:06:29.697
00:06:29.697 real 0m4.547s
00:06:29.697 user 0m4.447s
00:06:29.697 sys 0m0.516s
************************************
00:06:29.697 END TEST raid1_resize_data_offset_test
00:06:29.697 ************************************
00:06:29.697 18:47:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:29.697 18:47:12 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:06:29.697 18:47:12 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:06:29.697 18:47:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:29.697 18:47:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:29.697 18:47:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:29.697 ************************************
00:06:29.697 START TEST raid0_resize_superblock_test
************************************
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60033
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60033'
Process raid pid: 60033
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60033
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60033 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:29.697 18:47:12 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:29.697 [2024-11-16 18:47:13.000368] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:29.697 [2024-11-16 18:47:13.000587] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:29.957 [2024-11-16 18:47:13.176985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.957 [2024-11-16 18:47:13.290718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:30.217 [2024-11-16 18:47:13.485839] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-16 18:47:13.485964] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:30.477 18:47:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:30.477 18:47:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:30.477 18:47:13 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:30.477 18:47:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.477 18:47:13 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.044 malloc0
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.044 [2024-11-16 18:47:14.353344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:31.044 [2024-11-16 18:47:14.353475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:31.044 [2024-11-16 18:47:14.353563] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:31.044 [2024-11-16 18:47:14.353612] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:31.044 [2024-11-16 18:47:14.355793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:31.044 [2024-11-16 18:47:14.355878] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.044 1f1a9a69-fdf3-45c2-85b2-f652504344b3
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.044 416170ca-b542-4688-b944-071f8f701eab
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:31.044 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.045 160b013c-e12c-4f05-b7bb-55f665d45de5
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.045 [2024-11-16 18:47:14.486118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 416170ca-b542-4688-b944-071f8f701eab is claimed
00:06:31.045 [2024-11-16 18:47:14.486198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 160b013c-e12c-4f05-b7bb-55f665d45de5 is claimed
00:06:31.045 [2024-11-16 18:47:14.486322] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:31.045 [2024-11-16 18:47:14.486336] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:06:31.045 [2024-11-16 18:47:14.486568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:31.045 [2024-11-16 18:47:14.486788] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:31.045 [2024-11-16 18:47:14.486801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:31.045 [2024-11-16 18:47:14.486955] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.045 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.304 [2024-11-16 18:47:14.598121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.304 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.304 [2024-11-16 18:47:14.626037] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:31.304 [2024-11-16 18:47:14.626062] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '416170ca-b542-4688-b944-071f8f701eab' was resized: old size 131072, new size 204800
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.305 [2024-11-16 18:47:14.637928] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:06:31.305 [2024-11-16 18:47:14.637950] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '160b013c-e12c-4f05-b7bb-55f665d45de5' was resized: old size 131072, new size 204800
00:06:31.305 [2024-11-16 18:47:14.637978] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.305 [2024-11-16 18:47:14.737856] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:31.305 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.564 [2024-11-16 18:47:14.785584] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:06:31.564 [2024-11-16 18:47:14.785719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:06:31.564 [2024-11-16 18:47:14.785753] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:06:31.564 [2024-11-16 18:47:14.785836] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:06:31.564 [2024-11-16 18:47:14.785977] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:06:31.564 [2024-11-16 18:47:14.786051] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:06:31.564 [2024-11-16 18:47:14.786111] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.564 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.565 [2024-11-16 18:47:14.797513] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:31.565 [2024-11-16 18:47:14.797609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:31.565 [2024-11-16 18:47:14.797658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:06:31.565 [2024-11-16 18:47:14.797696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:31.565 [2024-11-16 18:47:14.799861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
[2024-11-16 18:47:14.799953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:06:31.565 [2024-11-16 18:47:14.801716] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 416170ca-b542-4688-b944-071f8f701eab
00:06:31.565 [2024-11-16 18:47:14.801831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 416170ca-b542-4688-b944-071f8f701eab is claimed
00:06:31.565 [2024-11-16 18:47:14.801998] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 160b013c-e12c-4f05-b7bb-55f665d45de5
00:06:31.565 [2024-11-16 18:47:14.802076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 160b013c-e12c-4f05-b7bb-55f665d45de5 is claimed
00:06:31.565 [2024-11-16 18:47:14.802312] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 160b013c-e12c-4f05-b7bb-55f665d45de5 (2) smaller than existing raid bdev Raid (3)
00:06:31.565 [2024-11-16 18:47:14.802387] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 416170ca-b542-4688-b944-071f8f701eab: File exists
00:06:31.565 [2024-11-16 18:47:14.802478] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
00:06:31.565 [2024-11-16 18:47:14.802532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
pt0
00:06:31.565 [2024-11-16 18:47:14.802815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
00:06:31.565 [2024-11-16 18:47:14.802978] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
00:06:31.565 [2024-11-16 18:47:14.802994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
00:06:31.565 [2024-11-16 18:47:14.803148] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:31.565 [2024-11-16 18:47:14.825951] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60033
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60033 ']'
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60033
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60033
killing process with pid 60033
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60033'
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60033
[2024-11-16 18:47:14.885672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-11-16 18:47:14.885723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
[2024-11-16 18:47:14.885757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-11-16 18:47:14.885765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:06:31.565 18:47:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60033
00:06:32.946 [2024-11-16 18:47:16.226601] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:06:33.884 ************************************
00:06:33.884 END TEST raid0_resize_superblock_test
00:06:33.884 ************************************
00:06:33.884 18:47:17 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:06:33.884
00:06:33.884 real 0m4.359s
00:06:33.884 user 0m4.516s
00:06:33.884 sys 0m0.563s
00:06:33.884 18:47:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:33.884 18:47:17 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:33.884 18:47:17 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:06:33.884 18:47:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:33.884 18:47:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:33.884 18:47:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:06:33.884 ************************************
00:06:33.884 START TEST raid1_resize_superblock_test
************************************
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60129
Process raid pid: 60129
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60129'
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60129
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60129 ']'
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:33.884 18:47:17 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:34.146 [2024-11-16 18:47:17.427301] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:06:34.146 [2024-11-16 18:47:17.427416] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:34.146 [2024-11-16 18:47:17.600799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:34.405 [2024-11-16 18:47:17.707553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:34.665 [2024-11-16 18:47:17.896713] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
[2024-11-16 18:47:17.896747] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:06:34.926 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:34.926 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:06:34.926 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:06:34.926 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:34.926 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.496 malloc0
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.496 [2024-11-16 18:47:18.783472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:06:35.496 [2024-11-16 18:47:18.783536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:35.496 [2024-11-16 18:47:18.783558] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:06:35.496 [2024-11-16 18:47:18.783568] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:35.496 [2024-11-16 18:47:18.785627] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:35.496 [2024-11-16 18:47:18.785732] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.496 d36d4941-d41b-4d54-9fb6-63fc9fbc2bb8
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.496 65631d3c-aa30-483f-ba0e-4c965db060d7
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.496 25a09665-3c29-49bb-bf0d-28a93ddbe1e7
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.496 [2024-11-16 18:47:18.915297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 65631d3c-aa30-483f-ba0e-4c965db060d7 is claimed
00:06:35.496 [2024-11-16 18:47:18.915372] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 25a09665-3c29-49bb-bf0d-28a93ddbe1e7 is claimed
00:06:35.496 [2024-11-16 18:47:18.915485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:06:35.496 [2024-11-16 18:47:18.915498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:06:35.496 [2024-11-16 18:47:18.915749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:06:35.496 [2024-11-16 18:47:18.915954] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:06:35.496 [2024-11-16 18:47:18.915966] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:06:35.496 [2024-11-16 18:47:18.916121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.496 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:35.757 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:06:35.757 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:06:35.757 18:47:18 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:06:35.757 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:35.757 18:47:18 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:06:35.757 18:47:18
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:06:35.757 [2024-11-16 18:47:19.023321] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.757 [2024-11-16 18:47:19.075145] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.757 [2024-11-16 18:47:19.075168] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '65631d3c-aa30-483f-ba0e-4c965db060d7' was resized: old size 131072, new size 204800 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.757 [2024-11-16 18:47:19.087085] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:35.757 [2024-11-16 18:47:19.087105] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '25a09665-3c29-49bb-bf0d-28a93ddbe1e7' was resized: old size 131072, new size 204800 00:06:35.757 [2024-11-16 18:47:19.087149] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:06:35.757 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:06:35.757 18:47:19 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:06:35.758 [2024-11-16 18:47:19.195035] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:35.758 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 [2024-11-16 18:47:19.246739] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:06:36.017 [2024-11-16 18:47:19.246804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:06:36.017 [2024-11-16 18:47:19.246828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:06:36.017 [2024-11-16 18:47:19.246953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:36.017 [2024-11-16 18:47:19.247119] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:36.017 [2024-11-16 18:47:19.247181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:36.017 [2024-11-16 18:47:19.247193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.017 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.017 [2024-11-16 18:47:19.258678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:06:36.017 [2024-11-16 18:47:19.258726] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:36.017 [2024-11-16 18:47:19.258746] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:06:36.017 [2024-11-16 18:47:19.258758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:36.017 
[2024-11-16 18:47:19.260881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:36.017 [2024-11-16 18:47:19.260976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:06:36.018 [2024-11-16 18:47:19.262702] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 65631d3c-aa30-483f-ba0e-4c965db060d7 00:06:36.018 [2024-11-16 18:47:19.262779] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 65631d3c-aa30-483f-ba0e-4c965db060d7 is claimed 00:06:36.018 [2024-11-16 18:47:19.262902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 25a09665-3c29-49bb-bf0d-28a93ddbe1e7 00:06:36.018 [2024-11-16 18:47:19.262921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 25a09665-3c29-49bb-bf0d-28a93ddbe1e7 is claimed 00:06:36.018 [2024-11-16 18:47:19.263060] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 25a09665-3c29-49bb-bf0d-28a93ddbe1e7 (2) smaller than existing raid bdev Raid (3) 00:06:36.018 [2024-11-16 18:47:19.263080] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 65631d3c-aa30-483f-ba0e-4c965db060d7: File exists 00:06:36.018 [2024-11-16 18:47:19.263113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:06:36.018 [2024-11-16 18:47:19.263124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:06:36.018 [2024-11-16 18:47:19.263368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:06:36.018 [2024-11-16 18:47:19.263522] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:06:36.018 [2024-11-16 18:47:19.263530] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:06:36.018 pt0 00:06:36.018 [2024-11-16 18:47:19.263718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:06:36.018 [2024-11-16 18:47:19.282976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60129 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test 
-- common/autotest_common.sh@954 -- # '[' -z 60129 ']' 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60129 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60129 00:06:36.018 killing process with pid 60129 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60129' 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60129 00:06:36.018 [2024-11-16 18:47:19.369175] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:36.018 [2024-11-16 18:47:19.369232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:36.018 [2024-11-16 18:47:19.369276] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:36.018 [2024-11-16 18:47:19.369283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:06:36.018 18:47:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60129 00:06:37.401 [2024-11-16 18:47:20.714836] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:38.341 18:47:21 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:06:38.341 00:06:38.341 real 0m4.423s 00:06:38.341 user 0m4.615s 00:06:38.341 sys 0m0.563s 
00:06:38.341 18:47:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.341 18:47:21 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:06:38.341 ************************************ 00:06:38.341 END TEST raid1_resize_superblock_test 00:06:38.341 ************************************ 00:06:38.602 18:47:21 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:06:38.602 18:47:21 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:06:38.602 18:47:21 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:06:38.602 18:47:21 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:06:38.602 18:47:21 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:06:38.602 18:47:21 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:06:38.602 18:47:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.602 18:47:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.602 18:47:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:38.602 ************************************ 00:06:38.602 START TEST raid_function_test_raid0 00:06:38.602 ************************************ 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60234 00:06:38.602 Process raid pid: 60234 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60234' 00:06:38.602 18:47:21 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60234 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60234 ']' 00:06:38.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.602 18:47:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:38.602 [2024-11-16 18:47:21.944161] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:38.602 [2024-11-16 18:47:21.944273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:38.861 [2024-11-16 18:47:22.116144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.861 [2024-11-16 18:47:22.224478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.122 [2024-11-16 18:47:22.422911] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:39.122 [2024-11-16 18:47:22.422945] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:39.382 Base_1 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:39.382 Base_2 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:39.382 [2024-11-16 18:47:22.845165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:39.382 [2024-11-16 18:47:22.847029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:39.382 [2024-11-16 18:47:22.847107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:39.382 [2024-11-16 18:47:22.847118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:39.382 [2024-11-16 18:47:22.847383] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:39.382 [2024-11-16 18:47:22.847526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:39.382 [2024-11-16 18:47:22.847535] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:39.382 [2024-11-16 18:47:22.847695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:39.382 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:39.641 18:47:22 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:39.641 [2024-11-16 18:47:23.068810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:39.642 /dev/nbd0 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:39.642 
18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:39.642 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:39.901 1+0 records in 00:06:39.901 1+0 records out 00:06:39.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247285 s, 16.6 MB/s 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.901 { 00:06:39.901 "nbd_device": "/dev/nbd0", 00:06:39.901 "bdev_name": "raid" 00:06:39.901 } 00:06:39.901 ]' 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.901 { 00:06:39.901 "nbd_device": "/dev/nbd0", 00:06:39.901 "bdev_name": "raid" 00:06:39.901 } 00:06:39.901 ]' 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:06:39.901 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:40.161 4096+0 records in 00:06:40.161 4096+0 records out 00:06:40.161 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0243679 s, 86.1 MB/s 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:40.161 4096+0 records in 00:06:40.161 4096+0 records out 00:06:40.161 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.213955 s, 9.8 MB/s 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:40.161 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:40.421 128+0 records in 00:06:40.421 128+0 records out 00:06:40.421 65536 bytes (66 kB, 64 KiB) copied, 0.00130847 s, 50.1 MB/s 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:40.421 2035+0 records in 00:06:40.421 2035+0 records out 00:06:40.421 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0109325 s, 95.3 MB/s 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:40.421 18:47:23 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:40.421 456+0 records in 00:06:40.421 456+0 records out 00:06:40.421 233472 bytes (233 kB, 228 KiB) copied, 0.00406991 s, 57.4 MB/s 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:40.421 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.422 18:47:23 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.422 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.681 [2024-11-16 18:47:23.942507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:40.681 18:47:23 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:40.681 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.681 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.681 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60234 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60234 ']' 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60234 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60234 00:06:40.941 killing process with pid 60234 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60234' 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60234 
00:06:40.941 [2024-11-16 18:47:24.228183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:40.941 [2024-11-16 18:47:24.228283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:40.941 [2024-11-16 18:47:24.228330] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:40.941 [2024-11-16 18:47:24.228345] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:40.941 18:47:24 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60234 00:06:41.201 [2024-11-16 18:47:24.429691] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:42.141 18:47:25 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:06:42.141 00:06:42.141 real 0m3.614s 00:06:42.141 user 0m4.157s 00:06:42.141 sys 0m0.890s 00:06:42.141 18:47:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.141 18:47:25 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:06:42.141 ************************************ 00:06:42.141 END TEST raid_function_test_raid0 00:06:42.141 ************************************ 00:06:42.141 18:47:25 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:06:42.141 18:47:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:42.141 18:47:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.141 18:47:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:42.141 ************************************ 00:06:42.141 START TEST raid_function_test_concat 00:06:42.141 ************************************ 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60352 00:06:42.141 Process raid pid: 60352 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60352' 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60352 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60352 ']' 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.141 18:47:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:42.401 [2024-11-16 18:47:25.630738] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:42.401 [2024-11-16 18:47:25.630847] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.401 [2024-11-16 18:47:25.807558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.661 [2024-11-16 18:47:25.918776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.661 [2024-11-16 18:47:26.118042] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:42.661 [2024-11-16 18:47:26.118103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:43.231 Base_1 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:43.231 Base_2 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:43.231 [2024-11-16 18:47:26.537193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:43.231 [2024-11-16 18:47:26.538954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:43.231 [2024-11-16 18:47:26.539041] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:43.231 [2024-11-16 18:47:26.539055] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:43.231 [2024-11-16 18:47:26.539326] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:43.231 [2024-11-16 18:47:26.539486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:43.231 [2024-11-16 18:47:26.539504] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:06:43.231 [2024-11-16 18:47:26.539658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.231 18:47:26 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:43.231 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:06:43.491 [2024-11-16 18:47:26.780845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:06:43.491 /dev/nbd0 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.491 1+0 records in 00:06:43.491 1+0 records out 00:06:43.491 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000425902 s, 9.6 MB/s 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 
00:06:43.491 18:47:26 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:43.751 { 00:06:43.751 "nbd_device": "/dev/nbd0", 00:06:43.751 "bdev_name": "raid" 00:06:43.751 } 00:06:43.751 ]' 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:43.751 { 00:06:43.751 "nbd_device": "/dev/nbd0", 00:06:43.751 "bdev_name": "raid" 00:06:43.751 } 00:06:43.751 ]' 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:06:43.751 18:47:27 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:06:43.751 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:06:43.752 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:06:43.752 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:06:43.752 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:06:43.752 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:06:43.752 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:06:43.752 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:06:43.752 4096+0 records in 00:06:43.752 4096+0 records out 00:06:43.752 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0324794 s, 64.6 MB/s 00:06:43.752 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:06:44.011 4096+0 records in 00:06:44.011 4096+0 records out 00:06:44.011 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.186113 s, 11.3 MB/s 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest 
/dev/nbd0 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:06:44.011 128+0 records in 00:06:44.011 128+0 records out 00:06:44.011 65536 bytes (66 kB, 64 KiB) copied, 0.00110587 s, 59.3 MB/s 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:06:44.011 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:06:44.012 2035+0 records in 00:06:44.012 2035+0 records out 00:06:44.012 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0149539 s, 69.7 MB/s 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- 
bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:06:44.012 456+0 records in 00:06:44.012 456+0 records out 00:06:44.012 233472 bytes (233 kB, 228 KiB) copied, 0.00274519 s, 85.0 MB/s 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:44.012 18:47:27 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.012 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:44.272 [2024-11-16 18:47:27.682121] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:06:44.272 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60352 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60352 ']' 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60352 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60352 00:06:44.532 killing process with pid 60352 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 60352' 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60352 [2024-11-16 18:47:27.972175] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:44.532 [2024-11-16 18:47:27.972287] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:44.532 [2024-11-16 18:47:27.972339] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:44.532 [2024-11-16 18:47:27.972352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:06:44.532 18:47:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60352 00:06:44.792 [2024-11-16 18:47:28.166923] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:46.204 ************************************ 00:06:46.204 END TEST raid_function_test_concat 00:06:46.204 ************************************ 00:06:46.204 18:47:29 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:06:46.204 00:06:46.204 real 0m3.669s 00:06:46.204 user 0m4.234s 00:06:46.204 sys 0m0.952s 00:06:46.204 18:47:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.204 18:47:29 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:06:46.204 18:47:29 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:06:46.204 18:47:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:46.204 18:47:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.204 18:47:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:46.204 ************************************ 00:06:46.204 START TEST raid0_resize_test 00:06:46.204 ************************************ 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- 
common/autotest_common.sh@1129 -- # raid_resize_test 0 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60479 00:06:46.204 Process raid pid: 60479 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60479' 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60479 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60479 ']' 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.204 18:47:29 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.204 [2024-11-16 18:47:29.373214] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:46.204 [2024-11-16 18:47:29.373363] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.204 [2024-11-16 18:47:29.549509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.204 [2024-11-16 18:47:29.651753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.464 [2024-11-16 18:47:29.848110] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.464 [2024-11-16 18:47:29.848149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:46.723 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.723 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:46.723 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:46.723 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.723 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.983 Base_1 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set 
+x 00:06:46.983 Base_2 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.983 [2024-11-16 18:47:30.223354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:46.983 [2024-11-16 18:47:30.225090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:46.983 [2024-11-16 18:47:30.225175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:46.983 [2024-11-16 18:47:30.225186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:46.983 [2024-11-16 18:47:30.225433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:46.983 [2024-11-16 18:47:30.225568] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:46.983 [2024-11-16 18:47:30.225579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:46.983 [2024-11-16 18:47:30.225763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 
00:06:46.983 [2024-11-16 18:47:30.235300] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:46.983 [2024-11-16 18:47:30.235331] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:46.983 true 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.983 [2024-11-16 18:47:30.251423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.983 [2024-11-16 18:47:30.299167] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:46.983 [2024-11-16 18:47:30.299192] 
bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:46.983 [2024-11-16 18:47:30.299218] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:46.983 true 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:46.983 [2024-11-16 18:47:30.315311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60479 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60479 ']' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60479 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- 
# '[' Linux = Linux ']' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60479 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:46.983 killing process with pid 60479 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60479' 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60479 00:06:46.983 [2024-11-16 18:47:30.364549] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:46.983 [2024-11-16 18:47:30.364624] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:46.983 [2024-11-16 18:47:30.364684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:46.983 [2024-11-16 18:47:30.364695] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:46.983 18:47:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60479 00:06:46.983 [2024-11-16 18:47:30.381191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:48.370 18:47:31 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:48.370 00:06:48.370 real 0m2.135s 00:06:48.370 user 0m2.261s 00:06:48.370 sys 0m0.320s 00:06:48.370 18:47:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:48.370 18:47:31 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.370 ************************************ 00:06:48.370 END TEST raid0_resize_test 00:06:48.370 ************************************ 00:06:48.370 18:47:31 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:06:48.370 
18:47:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:48.370 18:47:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:48.370 18:47:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:48.370 ************************************ 00:06:48.370 START TEST raid1_resize_test 00:06:48.370 ************************************ 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60536 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:48.370 Process raid pid: 60536 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60536' 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60536 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60536 ']' 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.370 18:47:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:48.370 [2024-11-16 18:47:31.573647] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:48.370 [2024-11-16 18:47:31.573784] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:48.370 [2024-11-16 18:47:31.731524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.649 [2024-11-16 18:47:31.837189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.649 [2024-11-16 18:47:32.031444] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.649 [2024-11-16 18:47:32.031487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:48.924 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.924 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:06:48.924 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:06:48.924 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.924 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.185 
Base_1 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.185 Base_2 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.185 [2024-11-16 18:47:32.419962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:49.185 [2024-11-16 18:47:32.421686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:49.185 [2024-11-16 18:47:32.421758] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:06:49.185 [2024-11-16 18:47:32.421769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:06:49.185 [2024-11-16 18:47:32.422001] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:06:49.185 [2024-11-16 18:47:32.422162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:06:49.185 [2024-11-16 18:47:32.422176] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:06:49.185 [2024-11-16 18:47:32.422318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.185 [2024-11-16 18:47:32.431933] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:49.185 [2024-11-16 18:47:32.431963] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:49.185 true 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.185 [2024-11-16 18:47:32.448050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.185 [2024-11-16 18:47:32.491809] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:49.185 [2024-11-16 18:47:32.491832] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:49.185 [2024-11-16 18:47:32.491854] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:06:49.185 true 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:49.185 [2024-11-16 18:47:32.507960] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- 
bdev/bdev_raid.sh@387 -- # killprocess 60536 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60536 ']' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60536 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60536 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60536' 00:06:49.185 killing process with pid 60536 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60536 00:06:49.185 [2024-11-16 18:47:32.590560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:49.185 [2024-11-16 18:47:32.590703] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:49.185 18:47:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60536 00:06:49.185 [2024-11-16 18:47:32.591183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:49.185 [2024-11-16 18:47:32.591260] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:06:49.185 [2024-11-16 18:47:32.607569] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:50.567 18:47:33 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:06:50.567 00:06:50.567 real 0m2.158s 00:06:50.567 user 0m2.292s 00:06:50.567 sys 0m0.326s 00:06:50.567 18:47:33 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:50.567 18:47:33 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.567 ************************************ 00:06:50.567 END TEST raid1_resize_test 00:06:50.567 ************************************ 00:06:50.567 18:47:33 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:06:50.567 18:47:33 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:06:50.567 18:47:33 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:50.567 18:47:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:50.567 18:47:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:50.567 18:47:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.567 ************************************ 00:06:50.567 START TEST raid_state_function_test 00:06:50.567 ************************************ 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60593 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60593' 00:06:50.567 Process raid pid: 60593 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60593 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60593 ']' 00:06:50.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.567 18:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.567 [2024-11-16 18:47:33.813313] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:50.567 [2024-11-16 18:47:33.813506] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.567 [2024-11-16 18:47:33.972748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.827 [2024-11-16 18:47:34.078634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.827 [2024-11-16 18:47:34.262554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:50.827 [2024-11-16 18:47:34.262710] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.397 [2024-11-16 18:47:34.639151] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:51.397 [2024-11-16 18:47:34.639202] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:51.397 [2024-11-16 18:47:34.639213] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.397 [2024-11-16 18:47:34.639223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.397 18:47:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.397 "name": "Existed_Raid", 00:06:51.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.397 "strip_size_kb": 64, 00:06:51.397 "state": "configuring", 00:06:51.397 
"raid_level": "raid0", 00:06:51.397 "superblock": false, 00:06:51.397 "num_base_bdevs": 2, 00:06:51.397 "num_base_bdevs_discovered": 0, 00:06:51.397 "num_base_bdevs_operational": 2, 00:06:51.397 "base_bdevs_list": [ 00:06:51.397 { 00:06:51.397 "name": "BaseBdev1", 00:06:51.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.397 "is_configured": false, 00:06:51.397 "data_offset": 0, 00:06:51.397 "data_size": 0 00:06:51.397 }, 00:06:51.397 { 00:06:51.397 "name": "BaseBdev2", 00:06:51.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.397 "is_configured": false, 00:06:51.397 "data_offset": 0, 00:06:51.397 "data_size": 0 00:06:51.397 } 00:06:51.397 ] 00:06:51.397 }' 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.397 18:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.657 [2024-11-16 18:47:35.110275] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:51.657 [2024-11-16 18:47:35.110364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:06:51.657 [2024-11-16 18:47:35.122242] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:51.657 [2024-11-16 18:47:35.122324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:51.657 [2024-11-16 18:47:35.122381] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:51.657 [2024-11-16 18:47:35.122423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.657 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.918 [2024-11-16 18:47:35.166224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:51.918 BaseBdev1 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.918 [ 00:06:51.918 { 00:06:51.918 "name": "BaseBdev1", 00:06:51.918 "aliases": [ 00:06:51.918 "65280b8d-3468-425b-984e-45b0da9de5c6" 00:06:51.918 ], 00:06:51.918 "product_name": "Malloc disk", 00:06:51.918 "block_size": 512, 00:06:51.918 "num_blocks": 65536, 00:06:51.918 "uuid": "65280b8d-3468-425b-984e-45b0da9de5c6", 00:06:51.918 "assigned_rate_limits": { 00:06:51.918 "rw_ios_per_sec": 0, 00:06:51.918 "rw_mbytes_per_sec": 0, 00:06:51.918 "r_mbytes_per_sec": 0, 00:06:51.918 "w_mbytes_per_sec": 0 00:06:51.918 }, 00:06:51.918 "claimed": true, 00:06:51.918 "claim_type": "exclusive_write", 00:06:51.918 "zoned": false, 00:06:51.918 "supported_io_types": { 00:06:51.918 "read": true, 00:06:51.918 "write": true, 00:06:51.918 "unmap": true, 00:06:51.918 "flush": true, 00:06:51.918 "reset": true, 00:06:51.918 "nvme_admin": false, 00:06:51.918 "nvme_io": false, 00:06:51.918 "nvme_io_md": false, 00:06:51.918 "write_zeroes": true, 00:06:51.918 "zcopy": true, 00:06:51.918 "get_zone_info": false, 00:06:51.918 "zone_management": false, 00:06:51.918 "zone_append": false, 00:06:51.918 "compare": false, 00:06:51.918 "compare_and_write": false, 00:06:51.918 "abort": true, 00:06:51.918 "seek_hole": false, 00:06:51.918 "seek_data": false, 00:06:51.918 "copy": true, 00:06:51.918 "nvme_iov_md": 
false 00:06:51.918 }, 00:06:51.918 "memory_domains": [ 00:06:51.918 { 00:06:51.918 "dma_device_id": "system", 00:06:51.918 "dma_device_type": 1 00:06:51.918 }, 00:06:51.918 { 00:06:51.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:51.918 "dma_device_type": 2 00:06:51.918 } 00:06:51.918 ], 00:06:51.918 "driver_specific": {} 00:06:51.918 } 00:06:51.918 ] 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:51.918 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:51.919 
18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:51.919 "name": "Existed_Raid", 00:06:51.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.919 "strip_size_kb": 64, 00:06:51.919 "state": "configuring", 00:06:51.919 "raid_level": "raid0", 00:06:51.919 "superblock": false, 00:06:51.919 "num_base_bdevs": 2, 00:06:51.919 "num_base_bdevs_discovered": 1, 00:06:51.919 "num_base_bdevs_operational": 2, 00:06:51.919 "base_bdevs_list": [ 00:06:51.919 { 00:06:51.919 "name": "BaseBdev1", 00:06:51.919 "uuid": "65280b8d-3468-425b-984e-45b0da9de5c6", 00:06:51.919 "is_configured": true, 00:06:51.919 "data_offset": 0, 00:06:51.919 "data_size": 65536 00:06:51.919 }, 00:06:51.919 { 00:06:51.919 "name": "BaseBdev2", 00:06:51.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:51.919 "is_configured": false, 00:06:51.919 "data_offset": 0, 00:06:51.919 "data_size": 0 00:06:51.919 } 00:06:51.919 ] 00:06:51.919 }' 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:51.919 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.179 [2024-11-16 18:47:35.609539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:52.179 [2024-11-16 18:47:35.609593] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.179 [2024-11-16 18:47:35.621561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:52.179 [2024-11-16 18:47:35.623477] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:52.179 [2024-11-16 18:47:35.623579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.179 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.439 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.439 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.439 "name": "Existed_Raid", 00:06:52.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.439 "strip_size_kb": 64, 00:06:52.439 "state": "configuring", 00:06:52.439 "raid_level": "raid0", 00:06:52.439 "superblock": false, 00:06:52.439 "num_base_bdevs": 2, 00:06:52.439 "num_base_bdevs_discovered": 1, 00:06:52.439 "num_base_bdevs_operational": 2, 00:06:52.439 "base_bdevs_list": [ 00:06:52.439 { 00:06:52.439 "name": "BaseBdev1", 00:06:52.439 "uuid": "65280b8d-3468-425b-984e-45b0da9de5c6", 00:06:52.439 "is_configured": true, 00:06:52.439 "data_offset": 0, 00:06:52.439 "data_size": 65536 00:06:52.439 }, 00:06:52.439 { 00:06:52.439 "name": "BaseBdev2", 00:06:52.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:52.439 "is_configured": false, 00:06:52.439 "data_offset": 0, 00:06:52.439 "data_size": 0 00:06:52.439 } 00:06:52.439 
] 00:06:52.439 }' 00:06:52.439 18:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:52.439 18:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.699 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:52.699 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.699 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.699 [2024-11-16 18:47:36.077015] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:52.699 [2024-11-16 18:47:36.077147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:52.699 [2024-11-16 18:47:36.077181] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:52.699 [2024-11-16 18:47:36.077646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:52.699 [2024-11-16 18:47:36.077896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:52.699 [2024-11-16 18:47:36.077950] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:52.699 [2024-11-16 18:47:36.078314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.699 BaseBdev2 00:06:52.699 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.699 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:06:52.699 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:52.699 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:52.699 18:47:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:06:52.699 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.700 [ 00:06:52.700 { 00:06:52.700 "name": "BaseBdev2", 00:06:52.700 "aliases": [ 00:06:52.700 "8064f9d7-64ce-4eb4-b7e2-5a22b769eeb6" 00:06:52.700 ], 00:06:52.700 "product_name": "Malloc disk", 00:06:52.700 "block_size": 512, 00:06:52.700 "num_blocks": 65536, 00:06:52.700 "uuid": "8064f9d7-64ce-4eb4-b7e2-5a22b769eeb6", 00:06:52.700 "assigned_rate_limits": { 00:06:52.700 "rw_ios_per_sec": 0, 00:06:52.700 "rw_mbytes_per_sec": 0, 00:06:52.700 "r_mbytes_per_sec": 0, 00:06:52.700 "w_mbytes_per_sec": 0 00:06:52.700 }, 00:06:52.700 "claimed": true, 00:06:52.700 "claim_type": "exclusive_write", 00:06:52.700 "zoned": false, 00:06:52.700 "supported_io_types": { 00:06:52.700 "read": true, 00:06:52.700 "write": true, 00:06:52.700 "unmap": true, 00:06:52.700 "flush": true, 00:06:52.700 "reset": true, 00:06:52.700 "nvme_admin": false, 00:06:52.700 "nvme_io": false, 00:06:52.700 "nvme_io_md": 
false, 00:06:52.700 "write_zeroes": true, 00:06:52.700 "zcopy": true, 00:06:52.700 "get_zone_info": false, 00:06:52.700 "zone_management": false, 00:06:52.700 "zone_append": false, 00:06:52.700 "compare": false, 00:06:52.700 "compare_and_write": false, 00:06:52.700 "abort": true, 00:06:52.700 "seek_hole": false, 00:06:52.700 "seek_data": false, 00:06:52.700 "copy": true, 00:06:52.700 "nvme_iov_md": false 00:06:52.700 }, 00:06:52.700 "memory_domains": [ 00:06:52.700 { 00:06:52.700 "dma_device_id": "system", 00:06:52.700 "dma_device_type": 1 00:06:52.700 }, 00:06:52.700 { 00:06:52.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:52.700 "dma_device_type": 2 00:06:52.700 } 00:06:52.700 ], 00:06:52.700 "driver_specific": {} 00:06:52.700 } 00:06:52.700 ] 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:52.700 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:52.960 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:52.960 "name": "Existed_Raid", 00:06:52.960 "uuid": "2ecaaf5c-2294-4c9d-ab4c-2785e2b1cd38", 00:06:52.960 "strip_size_kb": 64, 00:06:52.960 "state": "online", 00:06:52.960 "raid_level": "raid0", 00:06:52.960 "superblock": false, 00:06:52.960 "num_base_bdevs": 2, 00:06:52.960 "num_base_bdevs_discovered": 2, 00:06:52.960 "num_base_bdevs_operational": 2, 00:06:52.960 "base_bdevs_list": [ 00:06:52.960 { 00:06:52.960 "name": "BaseBdev1", 00:06:52.960 "uuid": "65280b8d-3468-425b-984e-45b0da9de5c6", 00:06:52.960 "is_configured": true, 00:06:52.960 "data_offset": 0, 00:06:52.960 "data_size": 65536 00:06:52.960 }, 00:06:52.960 { 00:06:52.960 "name": "BaseBdev2", 00:06:52.960 "uuid": "8064f9d7-64ce-4eb4-b7e2-5a22b769eeb6", 00:06:52.960 "is_configured": true, 00:06:52.960 "data_offset": 0, 00:06:52.960 "data_size": 65536 00:06:52.960 } 00:06:52.960 ] 00:06:52.960 }' 00:06:52.960 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:06:52.960 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.219 [2024-11-16 18:47:36.560526] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.219 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:53.219 "name": "Existed_Raid", 00:06:53.219 "aliases": [ 00:06:53.219 "2ecaaf5c-2294-4c9d-ab4c-2785e2b1cd38" 00:06:53.219 ], 00:06:53.219 "product_name": "Raid Volume", 00:06:53.219 "block_size": 512, 00:06:53.219 "num_blocks": 131072, 00:06:53.219 "uuid": "2ecaaf5c-2294-4c9d-ab4c-2785e2b1cd38", 00:06:53.219 "assigned_rate_limits": { 00:06:53.219 "rw_ios_per_sec": 0, 00:06:53.219 "rw_mbytes_per_sec": 0, 00:06:53.219 "r_mbytes_per_sec": 
0, 00:06:53.219 "w_mbytes_per_sec": 0 00:06:53.219 }, 00:06:53.219 "claimed": false, 00:06:53.219 "zoned": false, 00:06:53.219 "supported_io_types": { 00:06:53.219 "read": true, 00:06:53.219 "write": true, 00:06:53.219 "unmap": true, 00:06:53.219 "flush": true, 00:06:53.219 "reset": true, 00:06:53.219 "nvme_admin": false, 00:06:53.220 "nvme_io": false, 00:06:53.220 "nvme_io_md": false, 00:06:53.220 "write_zeroes": true, 00:06:53.220 "zcopy": false, 00:06:53.220 "get_zone_info": false, 00:06:53.220 "zone_management": false, 00:06:53.220 "zone_append": false, 00:06:53.220 "compare": false, 00:06:53.220 "compare_and_write": false, 00:06:53.220 "abort": false, 00:06:53.220 "seek_hole": false, 00:06:53.220 "seek_data": false, 00:06:53.220 "copy": false, 00:06:53.220 "nvme_iov_md": false 00:06:53.220 }, 00:06:53.220 "memory_domains": [ 00:06:53.220 { 00:06:53.220 "dma_device_id": "system", 00:06:53.220 "dma_device_type": 1 00:06:53.220 }, 00:06:53.220 { 00:06:53.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.220 "dma_device_type": 2 00:06:53.220 }, 00:06:53.220 { 00:06:53.220 "dma_device_id": "system", 00:06:53.220 "dma_device_type": 1 00:06:53.220 }, 00:06:53.220 { 00:06:53.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:53.220 "dma_device_type": 2 00:06:53.220 } 00:06:53.220 ], 00:06:53.220 "driver_specific": { 00:06:53.220 "raid": { 00:06:53.220 "uuid": "2ecaaf5c-2294-4c9d-ab4c-2785e2b1cd38", 00:06:53.220 "strip_size_kb": 64, 00:06:53.220 "state": "online", 00:06:53.220 "raid_level": "raid0", 00:06:53.220 "superblock": false, 00:06:53.220 "num_base_bdevs": 2, 00:06:53.220 "num_base_bdevs_discovered": 2, 00:06:53.220 "num_base_bdevs_operational": 2, 00:06:53.220 "base_bdevs_list": [ 00:06:53.220 { 00:06:53.220 "name": "BaseBdev1", 00:06:53.220 "uuid": "65280b8d-3468-425b-984e-45b0da9de5c6", 00:06:53.220 "is_configured": true, 00:06:53.220 "data_offset": 0, 00:06:53.220 "data_size": 65536 00:06:53.220 }, 00:06:53.220 { 00:06:53.220 "name": "BaseBdev2", 
00:06:53.220 "uuid": "8064f9d7-64ce-4eb4-b7e2-5a22b769eeb6", 00:06:53.220 "is_configured": true, 00:06:53.220 "data_offset": 0, 00:06:53.220 "data_size": 65536 00:06:53.220 } 00:06:53.220 ] 00:06:53.220 } 00:06:53.220 } 00:06:53.220 }' 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:53.220 BaseBdev2' 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.220 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.480 [2024-11-16 18:47:36.776062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:53.480 [2024-11-16 18:47:36.776099] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:53.480 [2024-11-16 18:47:36.776148] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:53.480 "name": "Existed_Raid", 00:06:53.480 "uuid": "2ecaaf5c-2294-4c9d-ab4c-2785e2b1cd38", 00:06:53.480 "strip_size_kb": 64, 00:06:53.480 
"state": "offline", 00:06:53.480 "raid_level": "raid0", 00:06:53.480 "superblock": false, 00:06:53.480 "num_base_bdevs": 2, 00:06:53.480 "num_base_bdevs_discovered": 1, 00:06:53.480 "num_base_bdevs_operational": 1, 00:06:53.480 "base_bdevs_list": [ 00:06:53.480 { 00:06:53.480 "name": null, 00:06:53.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:53.480 "is_configured": false, 00:06:53.480 "data_offset": 0, 00:06:53.480 "data_size": 65536 00:06:53.480 }, 00:06:53.480 { 00:06:53.480 "name": "BaseBdev2", 00:06:53.480 "uuid": "8064f9d7-64ce-4eb4-b7e2-5a22b769eeb6", 00:06:53.480 "is_configured": true, 00:06:53.480 "data_offset": 0, 00:06:53.480 "data_size": 65536 00:06:53.480 } 00:06:53.480 ] 00:06:53.480 }' 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:53.480 18:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.050 [2024-11-16 18:47:37.341381] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:54.050 [2024-11-16 18:47:37.341439] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60593 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60593 ']' 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60593 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.050 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60593 00:06:54.310 killing process with pid 60593 00:06:54.310 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.310 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.310 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60593' 00:06:54.310 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60593 00:06:54.310 [2024-11-16 18:47:37.526689] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:54.310 18:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60593 00:06:54.310 [2024-11-16 18:47:37.542877] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:55.250 ************************************ 00:06:55.250 END TEST raid_state_function_test 00:06:55.250 ************************************ 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:06:55.250 00:06:55.250 real 0m4.875s 00:06:55.250 user 0m7.095s 00:06:55.250 sys 0m0.745s 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.250 18:47:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:06:55.250 18:47:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:06:55.250 18:47:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.250 18:47:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:55.250 ************************************ 00:06:55.250 START TEST raid_state_function_test_sb 00:06:55.250 ************************************ 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:06:55.250 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60841 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60841' 00:06:55.251 Process raid pid: 60841 00:06:55.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60841 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60841 ']' 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:55.251 18:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:55.510 [2024-11-16 18:47:38.747407] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:55.510 [2024-11-16 18:47:38.747624] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.510 [2024-11-16 18:47:38.920963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.770 [2024-11-16 18:47:39.032160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.770 [2024-11-16 18:47:39.226464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:55.770 [2024-11-16 18:47:39.226582] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.340 [2024-11-16 18:47:39.568929] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:56.340 [2024-11-16 18:47:39.569037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:56.340 [2024-11-16 18:47:39.569083] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.340 [2024-11-16 18:47:39.569114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.340 
18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.340 "name": "Existed_Raid", 00:06:56.340 "uuid": "b6e12ffb-b8d9-4d86-aea6-86a5e3e7a502", 00:06:56.340 "strip_size_kb": 
64, 00:06:56.340 "state": "configuring", 00:06:56.340 "raid_level": "raid0", 00:06:56.340 "superblock": true, 00:06:56.340 "num_base_bdevs": 2, 00:06:56.340 "num_base_bdevs_discovered": 0, 00:06:56.340 "num_base_bdevs_operational": 2, 00:06:56.340 "base_bdevs_list": [ 00:06:56.340 { 00:06:56.340 "name": "BaseBdev1", 00:06:56.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.340 "is_configured": false, 00:06:56.340 "data_offset": 0, 00:06:56.340 "data_size": 0 00:06:56.340 }, 00:06:56.340 { 00:06:56.340 "name": "BaseBdev2", 00:06:56.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.340 "is_configured": false, 00:06:56.340 "data_offset": 0, 00:06:56.340 "data_size": 0 00:06:56.340 } 00:06:56.340 ] 00:06:56.340 }' 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.340 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.601 [2024-11-16 18:47:39.980186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:56.601 [2024-11-16 18:47:39.980219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.601 18:47:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.601 [2024-11-16 18:47:39.988153] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:56.601 [2024-11-16 18:47:39.988243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:56.601 [2024-11-16 18:47:39.988285] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.601 [2024-11-16 18:47:39.988331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.601 18:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.601 [2024-11-16 18:47:40.032025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.601 BaseBdev1 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.601 [ 00:06:56.601 { 00:06:56.601 "name": "BaseBdev1", 00:06:56.601 "aliases": [ 00:06:56.601 "a626110d-3014-4bc6-8be2-7b43b7012728" 00:06:56.601 ], 00:06:56.601 "product_name": "Malloc disk", 00:06:56.601 "block_size": 512, 00:06:56.601 "num_blocks": 65536, 00:06:56.601 "uuid": "a626110d-3014-4bc6-8be2-7b43b7012728", 00:06:56.601 "assigned_rate_limits": { 00:06:56.601 "rw_ios_per_sec": 0, 00:06:56.601 "rw_mbytes_per_sec": 0, 00:06:56.601 "r_mbytes_per_sec": 0, 00:06:56.601 "w_mbytes_per_sec": 0 00:06:56.601 }, 00:06:56.601 "claimed": true, 00:06:56.601 "claim_type": "exclusive_write", 00:06:56.601 "zoned": false, 00:06:56.601 "supported_io_types": { 00:06:56.601 "read": true, 00:06:56.601 "write": true, 00:06:56.601 "unmap": true, 00:06:56.601 "flush": true, 00:06:56.601 "reset": true, 00:06:56.601 "nvme_admin": false, 00:06:56.601 "nvme_io": false, 00:06:56.601 "nvme_io_md": false, 00:06:56.601 "write_zeroes": true, 00:06:56.601 "zcopy": true, 00:06:56.601 "get_zone_info": false, 00:06:56.601 "zone_management": false, 00:06:56.601 "zone_append": false, 00:06:56.601 "compare": false, 00:06:56.601 "compare_and_write": false, 00:06:56.601 
"abort": true, 00:06:56.601 "seek_hole": false, 00:06:56.601 "seek_data": false, 00:06:56.601 "copy": true, 00:06:56.601 "nvme_iov_md": false 00:06:56.601 }, 00:06:56.601 "memory_domains": [ 00:06:56.601 { 00:06:56.601 "dma_device_id": "system", 00:06:56.601 "dma_device_type": 1 00:06:56.601 }, 00:06:56.601 { 00:06:56.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.601 "dma_device_type": 2 00:06:56.601 } 00:06:56.601 ], 00:06:56.601 "driver_specific": {} 00:06:56.601 } 00:06:56.601 ] 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:56.601 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:56.861 "name": "Existed_Raid", 00:06:56.861 "uuid": "3517bd26-8cf6-44e3-9761-afc51dc499cc", 00:06:56.861 "strip_size_kb": 64, 00:06:56.861 "state": "configuring", 00:06:56.861 "raid_level": "raid0", 00:06:56.861 "superblock": true, 00:06:56.861 "num_base_bdevs": 2, 00:06:56.861 "num_base_bdevs_discovered": 1, 00:06:56.861 "num_base_bdevs_operational": 2, 00:06:56.861 "base_bdevs_list": [ 00:06:56.861 { 00:06:56.861 "name": "BaseBdev1", 00:06:56.861 "uuid": "a626110d-3014-4bc6-8be2-7b43b7012728", 00:06:56.861 "is_configured": true, 00:06:56.861 "data_offset": 2048, 00:06:56.861 "data_size": 63488 00:06:56.861 }, 00:06:56.861 { 00:06:56.861 "name": "BaseBdev2", 00:06:56.861 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:56.861 "is_configured": false, 00:06:56.861 "data_offset": 0, 00:06:56.861 "data_size": 0 00:06:56.861 } 00:06:56.861 ] 00:06:56.861 }' 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:56.861 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:06:57.121 [2024-11-16 18:47:40.507289] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:57.121 [2024-11-16 18:47:40.507349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.121 [2024-11-16 18:47:40.519348] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:57.121 [2024-11-16 18:47:40.521367] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:57.121 [2024-11-16 18:47:40.521463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.121 "name": "Existed_Raid", 00:06:57.121 "uuid": "8922f81b-e1a8-4115-a312-6fd246797537", 00:06:57.121 "strip_size_kb": 64, 00:06:57.121 "state": "configuring", 00:06:57.121 "raid_level": "raid0", 00:06:57.121 "superblock": true, 00:06:57.121 "num_base_bdevs": 2, 00:06:57.121 "num_base_bdevs_discovered": 1, 00:06:57.121 "num_base_bdevs_operational": 2, 00:06:57.121 "base_bdevs_list": [ 00:06:57.121 { 00:06:57.121 "name": "BaseBdev1", 00:06:57.121 "uuid": "a626110d-3014-4bc6-8be2-7b43b7012728", 00:06:57.121 "is_configured": true, 00:06:57.121 "data_offset": 2048, 
00:06:57.121 "data_size": 63488 00:06:57.121 }, 00:06:57.121 { 00:06:57.121 "name": "BaseBdev2", 00:06:57.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.121 "is_configured": false, 00:06:57.121 "data_offset": 0, 00:06:57.121 "data_size": 0 00:06:57.121 } 00:06:57.121 ] 00:06:57.121 }' 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.121 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.690 [2024-11-16 18:47:40.904359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:57.690 [2024-11-16 18:47:40.904734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:06:57.690 [2024-11-16 18:47:40.904790] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:06:57.690 [2024-11-16 18:47:40.905084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:06:57.690 [2024-11-16 18:47:40.905293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:06:57.690 [2024-11-16 18:47:40.905343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:06:57.690 BaseBdev2 00:06:57.690 [2024-11-16 18:47:40.905573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.690 [ 00:06:57.690 { 00:06:57.690 "name": "BaseBdev2", 00:06:57.690 "aliases": [ 00:06:57.690 "56691a69-d818-4171-a397-4ded84d32943" 00:06:57.690 ], 00:06:57.690 "product_name": "Malloc disk", 00:06:57.690 "block_size": 512, 00:06:57.690 "num_blocks": 65536, 00:06:57.690 "uuid": "56691a69-d818-4171-a397-4ded84d32943", 00:06:57.690 "assigned_rate_limits": { 00:06:57.690 "rw_ios_per_sec": 0, 00:06:57.690 "rw_mbytes_per_sec": 0, 00:06:57.690 "r_mbytes_per_sec": 0, 00:06:57.690 "w_mbytes_per_sec": 0 00:06:57.690 }, 00:06:57.690 "claimed": true, 00:06:57.690 "claim_type": 
"exclusive_write", 00:06:57.690 "zoned": false, 00:06:57.690 "supported_io_types": { 00:06:57.690 "read": true, 00:06:57.690 "write": true, 00:06:57.690 "unmap": true, 00:06:57.690 "flush": true, 00:06:57.690 "reset": true, 00:06:57.690 "nvme_admin": false, 00:06:57.690 "nvme_io": false, 00:06:57.690 "nvme_io_md": false, 00:06:57.690 "write_zeroes": true, 00:06:57.690 "zcopy": true, 00:06:57.690 "get_zone_info": false, 00:06:57.690 "zone_management": false, 00:06:57.690 "zone_append": false, 00:06:57.690 "compare": false, 00:06:57.690 "compare_and_write": false, 00:06:57.690 "abort": true, 00:06:57.690 "seek_hole": false, 00:06:57.690 "seek_data": false, 00:06:57.690 "copy": true, 00:06:57.690 "nvme_iov_md": false 00:06:57.690 }, 00:06:57.690 "memory_domains": [ 00:06:57.690 { 00:06:57.690 "dma_device_id": "system", 00:06:57.690 "dma_device_type": 1 00:06:57.690 }, 00:06:57.690 { 00:06:57.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.690 "dma_device_type": 2 00:06:57.690 } 00:06:57.690 ], 00:06:57.690 "driver_specific": {} 00:06:57.690 } 00:06:57.690 ] 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:06:57.690 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:57.691 "name": "Existed_Raid", 00:06:57.691 "uuid": "8922f81b-e1a8-4115-a312-6fd246797537", 00:06:57.691 "strip_size_kb": 64, 00:06:57.691 "state": "online", 00:06:57.691 "raid_level": "raid0", 00:06:57.691 "superblock": true, 00:06:57.691 "num_base_bdevs": 2, 00:06:57.691 "num_base_bdevs_discovered": 2, 00:06:57.691 "num_base_bdevs_operational": 2, 00:06:57.691 "base_bdevs_list": [ 00:06:57.691 { 00:06:57.691 "name": "BaseBdev1", 00:06:57.691 "uuid": "a626110d-3014-4bc6-8be2-7b43b7012728", 00:06:57.691 "is_configured": true, 00:06:57.691 "data_offset": 2048, 00:06:57.691 "data_size": 63488 
00:06:57.691 }, 00:06:57.691 { 00:06:57.691 "name": "BaseBdev2", 00:06:57.691 "uuid": "56691a69-d818-4171-a397-4ded84d32943", 00:06:57.691 "is_configured": true, 00:06:57.691 "data_offset": 2048, 00:06:57.691 "data_size": 63488 00:06:57.691 } 00:06:57.691 ] 00:06:57.691 }' 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:57.691 18:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.966 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:06:57.966 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:06:57.966 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:06:57.966 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:06:57.966 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:06:57.966 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:06:57.966 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:06:57.966 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:06:57.967 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:57.967 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:57.967 [2024-11-16 18:47:41.324024] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:57.967 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:57.967 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:06:57.967 "name": 
"Existed_Raid", 00:06:57.967 "aliases": [ 00:06:57.967 "8922f81b-e1a8-4115-a312-6fd246797537" 00:06:57.967 ], 00:06:57.967 "product_name": "Raid Volume", 00:06:57.967 "block_size": 512, 00:06:57.967 "num_blocks": 126976, 00:06:57.967 "uuid": "8922f81b-e1a8-4115-a312-6fd246797537", 00:06:57.967 "assigned_rate_limits": { 00:06:57.967 "rw_ios_per_sec": 0, 00:06:57.967 "rw_mbytes_per_sec": 0, 00:06:57.967 "r_mbytes_per_sec": 0, 00:06:57.967 "w_mbytes_per_sec": 0 00:06:57.967 }, 00:06:57.967 "claimed": false, 00:06:57.967 "zoned": false, 00:06:57.967 "supported_io_types": { 00:06:57.967 "read": true, 00:06:57.967 "write": true, 00:06:57.967 "unmap": true, 00:06:57.967 "flush": true, 00:06:57.967 "reset": true, 00:06:57.967 "nvme_admin": false, 00:06:57.967 "nvme_io": false, 00:06:57.967 "nvme_io_md": false, 00:06:57.967 "write_zeroes": true, 00:06:57.967 "zcopy": false, 00:06:57.967 "get_zone_info": false, 00:06:57.967 "zone_management": false, 00:06:57.967 "zone_append": false, 00:06:57.967 "compare": false, 00:06:57.967 "compare_and_write": false, 00:06:57.967 "abort": false, 00:06:57.967 "seek_hole": false, 00:06:57.967 "seek_data": false, 00:06:57.967 "copy": false, 00:06:57.967 "nvme_iov_md": false 00:06:57.967 }, 00:06:57.967 "memory_domains": [ 00:06:57.967 { 00:06:57.967 "dma_device_id": "system", 00:06:57.967 "dma_device_type": 1 00:06:57.967 }, 00:06:57.967 { 00:06:57.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.967 "dma_device_type": 2 00:06:57.967 }, 00:06:57.967 { 00:06:57.967 "dma_device_id": "system", 00:06:57.967 "dma_device_type": 1 00:06:57.967 }, 00:06:57.967 { 00:06:57.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.967 "dma_device_type": 2 00:06:57.967 } 00:06:57.967 ], 00:06:57.967 "driver_specific": { 00:06:57.967 "raid": { 00:06:57.967 "uuid": "8922f81b-e1a8-4115-a312-6fd246797537", 00:06:57.967 "strip_size_kb": 64, 00:06:57.967 "state": "online", 00:06:57.967 "raid_level": "raid0", 00:06:57.967 "superblock": true, 00:06:57.967 
"num_base_bdevs": 2, 00:06:57.967 "num_base_bdevs_discovered": 2, 00:06:57.967 "num_base_bdevs_operational": 2, 00:06:57.967 "base_bdevs_list": [ 00:06:57.967 { 00:06:57.967 "name": "BaseBdev1", 00:06:57.967 "uuid": "a626110d-3014-4bc6-8be2-7b43b7012728", 00:06:57.967 "is_configured": true, 00:06:57.967 "data_offset": 2048, 00:06:57.967 "data_size": 63488 00:06:57.967 }, 00:06:57.967 { 00:06:57.967 "name": "BaseBdev2", 00:06:57.967 "uuid": "56691a69-d818-4171-a397-4ded84d32943", 00:06:57.967 "is_configured": true, 00:06:57.967 "data_offset": 2048, 00:06:57.967 "data_size": 63488 00:06:57.967 } 00:06:57.967 ] 00:06:57.967 } 00:06:57.967 } 00:06:57.967 }' 00:06:57.967 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:06:57.967 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:06:57.967 BaseBdev2' 00:06:57.967 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:57.967 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:06:57.967 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 [2024-11-16 18:47:41.519374] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:06:58.230 [2024-11-16 18:47:41.519448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:06:58.230 [2024-11-16 18:47:41.519532] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.230 18:47:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:06:58.230 "name": "Existed_Raid", 00:06:58.230 "uuid": "8922f81b-e1a8-4115-a312-6fd246797537", 00:06:58.230 "strip_size_kb": 64, 00:06:58.230 "state": "offline", 00:06:58.230 "raid_level": "raid0", 00:06:58.230 "superblock": true, 00:06:58.230 "num_base_bdevs": 2, 00:06:58.230 "num_base_bdevs_discovered": 1, 00:06:58.230 "num_base_bdevs_operational": 1, 00:06:58.230 "base_bdevs_list": [ 00:06:58.230 { 00:06:58.230 "name": null, 00:06:58.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.230 "is_configured": false, 00:06:58.230 "data_offset": 0, 00:06:58.230 "data_size": 63488 00:06:58.230 }, 00:06:58.230 { 00:06:58.230 "name": "BaseBdev2", 00:06:58.230 "uuid": "56691a69-d818-4171-a397-4ded84d32943", 00:06:58.230 "is_configured": true, 00:06:58.230 "data_offset": 2048, 00:06:58.230 "data_size": 63488 00:06:58.230 } 00:06:58.230 ] 00:06:58.230 }' 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:06:58.230 18:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.800 18:47:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.800 [2024-11-16 18:47:42.066493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:06:58.800 [2024-11-16 18:47:42.066622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60841 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60841 ']' 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60841 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60841 00:06:58.800 killing process with pid 60841 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60841' 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60841 00:06:58.800 [2024-11-16 18:47:42.251636] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:58.800 18:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60841 00:06:58.800 [2024-11-16 18:47:42.267951] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:00.216 18:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 
0 00:07:00.216 00:07:00.216 real 0m4.694s 00:07:00.216 user 0m6.736s 00:07:00.216 sys 0m0.701s 00:07:00.216 18:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.216 18:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:00.216 ************************************ 00:07:00.216 END TEST raid_state_function_test_sb 00:07:00.216 ************************************ 00:07:00.216 18:47:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:00.216 18:47:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:00.216 18:47:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.216 18:47:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:00.216 ************************************ 00:07:00.216 START TEST raid_superblock_test 00:07:00.216 ************************************ 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61093 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61093 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61093 ']' 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.216 18:47:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.216 [2024-11-16 18:47:43.501216] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:00.216 [2024-11-16 18:47:43.501436] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61093 ] 00:07:00.216 [2024-11-16 18:47:43.672290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.476 [2024-11-16 18:47:43.782808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.735 [2024-11-16 18:47:43.979750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.735 [2024-11-16 18:47:43.979888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:00.996 18:47:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.996 malloc1 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.996 [2024-11-16 18:47:44.367037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:00.996 [2024-11-16 18:47:44.367174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.996 [2024-11-16 18:47:44.367236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:00.996 [2024-11-16 18:47:44.367274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.996 [2024-11-16 18:47:44.369384] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.996 [2024-11-16 18:47:44.369468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:00.996 pt1 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:00.996 18:47:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.996 malloc2 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.996 [2024-11-16 18:47:44.426005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:00.996 [2024-11-16 18:47:44.426059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:00.996 [2024-11-16 18:47:44.426081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:00.996 
[2024-11-16 18:47:44.426090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:00.996 [2024-11-16 18:47:44.428125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:00.996 [2024-11-16 18:47:44.428160] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:00.996 pt2 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.996 [2024-11-16 18:47:44.438044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:00.996 [2024-11-16 18:47:44.439823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:00.996 [2024-11-16 18:47:44.439989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:00.996 [2024-11-16 18:47:44.440002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:00.996 [2024-11-16 18:47:44.440237] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:00.996 [2024-11-16 18:47:44.440389] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:00.996 [2024-11-16 18:47:44.440400] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:00.996 [2024-11-16 18:47:44.440545] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:00.996 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.256 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:01.256 "name": "raid_bdev1", 00:07:01.256 "uuid": 
"39badcb4-4a7f-45b1-aaf6-7d59a1e22ced", 00:07:01.256 "strip_size_kb": 64, 00:07:01.256 "state": "online", 00:07:01.256 "raid_level": "raid0", 00:07:01.256 "superblock": true, 00:07:01.256 "num_base_bdevs": 2, 00:07:01.256 "num_base_bdevs_discovered": 2, 00:07:01.256 "num_base_bdevs_operational": 2, 00:07:01.256 "base_bdevs_list": [ 00:07:01.256 { 00:07:01.256 "name": "pt1", 00:07:01.256 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:01.256 "is_configured": true, 00:07:01.256 "data_offset": 2048, 00:07:01.256 "data_size": 63488 00:07:01.256 }, 00:07:01.257 { 00:07:01.257 "name": "pt2", 00:07:01.257 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:01.257 "is_configured": true, 00:07:01.257 "data_offset": 2048, 00:07:01.257 "data_size": 63488 00:07:01.257 } 00:07:01.257 ] 00:07:01.257 }' 00:07:01.257 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:01.257 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.517 
18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:01.517 [2024-11-16 18:47:44.833604] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:01.517 "name": "raid_bdev1", 00:07:01.517 "aliases": [ 00:07:01.517 "39badcb4-4a7f-45b1-aaf6-7d59a1e22ced" 00:07:01.517 ], 00:07:01.517 "product_name": "Raid Volume", 00:07:01.517 "block_size": 512, 00:07:01.517 "num_blocks": 126976, 00:07:01.517 "uuid": "39badcb4-4a7f-45b1-aaf6-7d59a1e22ced", 00:07:01.517 "assigned_rate_limits": { 00:07:01.517 "rw_ios_per_sec": 0, 00:07:01.517 "rw_mbytes_per_sec": 0, 00:07:01.517 "r_mbytes_per_sec": 0, 00:07:01.517 "w_mbytes_per_sec": 0 00:07:01.517 }, 00:07:01.517 "claimed": false, 00:07:01.517 "zoned": false, 00:07:01.517 "supported_io_types": { 00:07:01.517 "read": true, 00:07:01.517 "write": true, 00:07:01.517 "unmap": true, 00:07:01.517 "flush": true, 00:07:01.517 "reset": true, 00:07:01.517 "nvme_admin": false, 00:07:01.517 "nvme_io": false, 00:07:01.517 "nvme_io_md": false, 00:07:01.517 "write_zeroes": true, 00:07:01.517 "zcopy": false, 00:07:01.517 "get_zone_info": false, 00:07:01.517 "zone_management": false, 00:07:01.517 "zone_append": false, 00:07:01.517 "compare": false, 00:07:01.517 "compare_and_write": false, 00:07:01.517 "abort": false, 00:07:01.517 "seek_hole": false, 00:07:01.517 "seek_data": false, 00:07:01.517 "copy": false, 00:07:01.517 "nvme_iov_md": false 00:07:01.517 }, 00:07:01.517 "memory_domains": [ 00:07:01.517 { 00:07:01.517 "dma_device_id": "system", 00:07:01.517 "dma_device_type": 1 00:07:01.517 }, 00:07:01.517 { 00:07:01.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.517 "dma_device_type": 2 00:07:01.517 }, 00:07:01.517 { 00:07:01.517 "dma_device_id": "system", 00:07:01.517 
"dma_device_type": 1 00:07:01.517 }, 00:07:01.517 { 00:07:01.517 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.517 "dma_device_type": 2 00:07:01.517 } 00:07:01.517 ], 00:07:01.517 "driver_specific": { 00:07:01.517 "raid": { 00:07:01.517 "uuid": "39badcb4-4a7f-45b1-aaf6-7d59a1e22ced", 00:07:01.517 "strip_size_kb": 64, 00:07:01.517 "state": "online", 00:07:01.517 "raid_level": "raid0", 00:07:01.517 "superblock": true, 00:07:01.517 "num_base_bdevs": 2, 00:07:01.517 "num_base_bdevs_discovered": 2, 00:07:01.517 "num_base_bdevs_operational": 2, 00:07:01.517 "base_bdevs_list": [ 00:07:01.517 { 00:07:01.517 "name": "pt1", 00:07:01.517 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:01.517 "is_configured": true, 00:07:01.517 "data_offset": 2048, 00:07:01.517 "data_size": 63488 00:07:01.517 }, 00:07:01.517 { 00:07:01.517 "name": "pt2", 00:07:01.517 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:01.517 "is_configured": true, 00:07:01.517 "data_offset": 2048, 00:07:01.517 "data_size": 63488 00:07:01.517 } 00:07:01.517 ] 00:07:01.517 } 00:07:01.517 } 00:07:01.517 }' 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:01.517 pt2' 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.517 18:47:44 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.517 18:47:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.778 18:47:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:01.778 [2024-11-16 18:47:45.065150] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=39badcb4-4a7f-45b1-aaf6-7d59a1e22ced 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 39badcb4-4a7f-45b1-aaf6-7d59a1e22ced ']' 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.778 [2024-11-16 18:47:45.112810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:01.778 [2024-11-16 18:47:45.112832] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.778 [2024-11-16 18:47:45.112907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.778 [2024-11-16 18:47:45.112956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:01.778 [2024-11-16 18:47:45.112967] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.778 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:01.778 [2024-11-16 18:47:45.244643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:01.778 [2024-11-16 18:47:45.246571] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:01.778 [2024-11-16 18:47:45.246640] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:01.778 [2024-11-16 18:47:45.246697] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:01.778 [2024-11-16 18:47:45.246712] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:01.778 [2024-11-16 18:47:45.246725] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:02.039 request: 00:07:02.039 { 00:07:02.039 "name": "raid_bdev1", 00:07:02.039 "raid_level": "raid0", 00:07:02.039 "base_bdevs": [ 00:07:02.039 "malloc1", 00:07:02.039 "malloc2" 00:07:02.039 ], 00:07:02.039 "strip_size_kb": 64, 00:07:02.039 "superblock": false, 00:07:02.039 "method": "bdev_raid_create", 00:07:02.039 "req_id": 1 00:07:02.039 } 00:07:02.039 Got JSON-RPC error response 00:07:02.039 response: 00:07:02.039 { 00:07:02.039 "code": -17, 00:07:02.039 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:02.039 } 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p 
pt1 -u 00000000-0000-0000-0000-000000000001 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.039 [2024-11-16 18:47:45.312505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:02.039 [2024-11-16 18:47:45.312640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.039 [2024-11-16 18:47:45.312714] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:02.039 [2024-11-16 18:47:45.312764] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.039 [2024-11-16 18:47:45.315049] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.039 [2024-11-16 18:47:45.315138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:02.039 [2024-11-16 18:47:45.315287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:02.039 [2024-11-16 18:47:45.315420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:02.039 pt1 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.039 "name": "raid_bdev1", 00:07:02.039 "uuid": "39badcb4-4a7f-45b1-aaf6-7d59a1e22ced", 00:07:02.039 "strip_size_kb": 64, 00:07:02.039 "state": "configuring", 00:07:02.039 "raid_level": "raid0", 00:07:02.039 "superblock": true, 00:07:02.039 "num_base_bdevs": 2, 00:07:02.039 "num_base_bdevs_discovered": 1, 00:07:02.039 "num_base_bdevs_operational": 2, 00:07:02.039 "base_bdevs_list": [ 00:07:02.039 { 00:07:02.039 "name": "pt1", 00:07:02.039 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.039 "is_configured": true, 00:07:02.039 "data_offset": 2048, 00:07:02.039 "data_size": 63488 00:07:02.039 }, 00:07:02.039 { 00:07:02.039 "name": null, 00:07:02.039 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.039 "is_configured": false, 00:07:02.039 "data_offset": 2048, 00:07:02.039 "data_size": 63488 00:07:02.039 } 00:07:02.039 ] 00:07:02.039 }' 00:07:02.039 18:47:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.039 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.300 [2024-11-16 18:47:45.755795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:02.300 [2024-11-16 18:47:45.755868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:02.300 [2024-11-16 18:47:45.755899] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:02.300 [2024-11-16 18:47:45.755909] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:02.300 [2024-11-16 18:47:45.756394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:02.300 [2024-11-16 18:47:45.756415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:02.300 [2024-11-16 18:47:45.756496] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:02.300 [2024-11-16 18:47:45.756519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:02.300 [2024-11-16 18:47:45.756638] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:02.300 [2024-11-16 18:47:45.756648] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:02.300 [2024-11-16 18:47:45.756987] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:02.300 [2024-11-16 18:47:45.757200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:02.300 [2024-11-16 18:47:45.757247] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:02.300 [2024-11-16 18:47:45.757461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:02.300 pt2 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.300 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.560 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.560 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:02.560 "name": "raid_bdev1", 00:07:02.560 "uuid": "39badcb4-4a7f-45b1-aaf6-7d59a1e22ced", 00:07:02.560 "strip_size_kb": 64, 00:07:02.560 "state": "online", 00:07:02.560 "raid_level": "raid0", 00:07:02.560 "superblock": true, 00:07:02.560 "num_base_bdevs": 2, 00:07:02.560 "num_base_bdevs_discovered": 2, 00:07:02.560 "num_base_bdevs_operational": 2, 00:07:02.560 "base_bdevs_list": [ 00:07:02.560 { 00:07:02.560 "name": "pt1", 00:07:02.560 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.560 "is_configured": true, 00:07:02.560 "data_offset": 2048, 00:07:02.560 "data_size": 63488 00:07:02.560 }, 00:07:02.560 { 00:07:02.560 "name": "pt2", 00:07:02.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.560 "is_configured": true, 00:07:02.560 "data_offset": 2048, 00:07:02.560 "data_size": 63488 00:07:02.560 } 00:07:02.560 ] 00:07:02.560 }' 00:07:02.560 18:47:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:02.560 18:47:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:02.820 
18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.820 [2024-11-16 18:47:46.175318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:02.820 "name": "raid_bdev1", 00:07:02.820 "aliases": [ 00:07:02.820 "39badcb4-4a7f-45b1-aaf6-7d59a1e22ced" 00:07:02.820 ], 00:07:02.820 "product_name": "Raid Volume", 00:07:02.820 "block_size": 512, 00:07:02.820 "num_blocks": 126976, 00:07:02.820 "uuid": "39badcb4-4a7f-45b1-aaf6-7d59a1e22ced", 00:07:02.820 "assigned_rate_limits": { 00:07:02.820 "rw_ios_per_sec": 0, 00:07:02.820 "rw_mbytes_per_sec": 0, 00:07:02.820 "r_mbytes_per_sec": 0, 00:07:02.820 "w_mbytes_per_sec": 0 00:07:02.820 }, 00:07:02.820 "claimed": false, 00:07:02.820 "zoned": false, 00:07:02.820 "supported_io_types": { 00:07:02.820 "read": true, 00:07:02.820 "write": true, 00:07:02.820 "unmap": true, 00:07:02.820 "flush": true, 00:07:02.820 "reset": true, 00:07:02.820 "nvme_admin": false, 00:07:02.820 "nvme_io": false, 00:07:02.820 "nvme_io_md": false, 00:07:02.820 
"write_zeroes": true, 00:07:02.820 "zcopy": false, 00:07:02.820 "get_zone_info": false, 00:07:02.820 "zone_management": false, 00:07:02.820 "zone_append": false, 00:07:02.820 "compare": false, 00:07:02.820 "compare_and_write": false, 00:07:02.820 "abort": false, 00:07:02.820 "seek_hole": false, 00:07:02.820 "seek_data": false, 00:07:02.820 "copy": false, 00:07:02.820 "nvme_iov_md": false 00:07:02.820 }, 00:07:02.820 "memory_domains": [ 00:07:02.820 { 00:07:02.820 "dma_device_id": "system", 00:07:02.820 "dma_device_type": 1 00:07:02.820 }, 00:07:02.820 { 00:07:02.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.820 "dma_device_type": 2 00:07:02.820 }, 00:07:02.820 { 00:07:02.820 "dma_device_id": "system", 00:07:02.820 "dma_device_type": 1 00:07:02.820 }, 00:07:02.820 { 00:07:02.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:02.820 "dma_device_type": 2 00:07:02.820 } 00:07:02.820 ], 00:07:02.820 "driver_specific": { 00:07:02.820 "raid": { 00:07:02.820 "uuid": "39badcb4-4a7f-45b1-aaf6-7d59a1e22ced", 00:07:02.820 "strip_size_kb": 64, 00:07:02.820 "state": "online", 00:07:02.820 "raid_level": "raid0", 00:07:02.820 "superblock": true, 00:07:02.820 "num_base_bdevs": 2, 00:07:02.820 "num_base_bdevs_discovered": 2, 00:07:02.820 "num_base_bdevs_operational": 2, 00:07:02.820 "base_bdevs_list": [ 00:07:02.820 { 00:07:02.820 "name": "pt1", 00:07:02.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:02.820 "is_configured": true, 00:07:02.820 "data_offset": 2048, 00:07:02.820 "data_size": 63488 00:07:02.820 }, 00:07:02.820 { 00:07:02.820 "name": "pt2", 00:07:02.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:02.820 "is_configured": true, 00:07:02.820 "data_offset": 2048, 00:07:02.820 "data_size": 63488 00:07:02.820 } 00:07:02.820 ] 00:07:02.820 } 00:07:02.820 } 00:07:02.820 }' 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:02.820 pt2' 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.820 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.080 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.080 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.080 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.080 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:03.080 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:03.080 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.081 18:47:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.081 [2024-11-16 18:47:46.362972] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 39badcb4-4a7f-45b1-aaf6-7d59a1e22ced '!=' 39badcb4-4a7f-45b1-aaf6-7d59a1e22ced ']' 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61093 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61093 ']' 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61093 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61093 00:07:03.081 killing process with pid 61093 
00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61093' 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61093 00:07:03.081 [2024-11-16 18:47:46.434030] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:03.081 [2024-11-16 18:47:46.434118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:03.081 [2024-11-16 18:47:46.434164] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:03.081 [2024-11-16 18:47:46.434175] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:03.081 18:47:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61093 00:07:03.340 [2024-11-16 18:47:46.635517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:04.277 18:47:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:04.277 00:07:04.277 real 0m4.308s 00:07:04.277 user 0m6.021s 00:07:04.277 sys 0m0.692s 00:07:04.277 18:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.277 18:47:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.277 ************************************ 00:07:04.277 END TEST raid_superblock_test 00:07:04.277 ************************************ 00:07:04.537 18:47:47 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:04.537 18:47:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:04.537 18:47:47 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.537 18:47:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:04.537 ************************************ 00:07:04.537 START TEST raid_read_error_test 00:07:04.537 ************************************ 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:04.537 18:47:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1icYtNIrzF 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61299 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61299 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61299 ']' 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.537 18:47:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:04.537 [2024-11-16 18:47:47.891382] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:04.537 [2024-11-16 18:47:47.891579] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61299 ] 00:07:04.796 [2024-11-16 18:47:48.074699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.796 [2024-11-16 18:47:48.188878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.055 [2024-11-16 18:47:48.391273] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.055 [2024-11-16 18:47:48.391409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.315 BaseBdev1_malloc 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.315 true 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.315 [2024-11-16 18:47:48.779175] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:05.315 [2024-11-16 18:47:48.779271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.315 [2024-11-16 18:47:48.779294] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:05.315 [2024-11-16 18:47:48.779306] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.315 [2024-11-16 18:47:48.781482] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.315 [2024-11-16 18:47:48.781522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:05.315 BaseBdev1 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.315 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:07:05.575 BaseBdev2_malloc 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.575 true 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.575 [2024-11-16 18:47:48.844557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:05.575 [2024-11-16 18:47:48.844612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:05.575 [2024-11-16 18:47:48.844629] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:05.575 [2024-11-16 18:47:48.844640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:05.575 [2024-11-16 18:47:48.846798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:05.575 [2024-11-16 18:47:48.846836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:05.575 BaseBdev2 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:05.575 18:47:48 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.575 [2024-11-16 18:47:48.856601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:05.575 [2024-11-16 18:47:48.858590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:05.575 [2024-11-16 18:47:48.858785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:05.575 [2024-11-16 18:47:48.858804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:05.575 [2024-11-16 18:47:48.859022] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:05.575 [2024-11-16 18:47:48.859211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:05.575 [2024-11-16 18:47:48.859230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:05.575 [2024-11-16 18:47:48.859396] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:05.575 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:05.576 "name": "raid_bdev1", 00:07:05.576 "uuid": "d9ab6b11-453a-4e4b-8ca7-41497fbc65c7", 00:07:05.576 "strip_size_kb": 64, 00:07:05.576 "state": "online", 00:07:05.576 "raid_level": "raid0", 00:07:05.576 "superblock": true, 00:07:05.576 "num_base_bdevs": 2, 00:07:05.576 "num_base_bdevs_discovered": 2, 00:07:05.576 "num_base_bdevs_operational": 2, 00:07:05.576 "base_bdevs_list": [ 00:07:05.576 { 00:07:05.576 "name": "BaseBdev1", 00:07:05.576 "uuid": "efc16782-2270-53d0-898d-f3aca72bb452", 00:07:05.576 "is_configured": true, 00:07:05.576 "data_offset": 2048, 00:07:05.576 "data_size": 63488 00:07:05.576 }, 00:07:05.576 { 00:07:05.576 "name": "BaseBdev2", 00:07:05.576 "uuid": "2f5c1ea7-941c-5faf-8454-b0be52493fbb", 00:07:05.576 "is_configured": true, 00:07:05.576 "data_offset": 2048, 00:07:05.576 "data_size": 63488 00:07:05.576 } 00:07:05.576 ] 00:07:05.576 }' 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:05.576 18:47:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:05.849 18:47:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:05.849 18:47:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:06.108 [2024-11-16 18:47:49.397069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:07.044 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:07.044 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.044 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:07.045 "name": "raid_bdev1", 00:07:07.045 "uuid": "d9ab6b11-453a-4e4b-8ca7-41497fbc65c7", 00:07:07.045 "strip_size_kb": 64, 00:07:07.045 "state": "online", 00:07:07.045 "raid_level": "raid0", 00:07:07.045 "superblock": true, 00:07:07.045 "num_base_bdevs": 2, 00:07:07.045 "num_base_bdevs_discovered": 2, 00:07:07.045 "num_base_bdevs_operational": 2, 00:07:07.045 "base_bdevs_list": [ 00:07:07.045 { 00:07:07.045 "name": "BaseBdev1", 00:07:07.045 "uuid": "efc16782-2270-53d0-898d-f3aca72bb452", 00:07:07.045 "is_configured": true, 00:07:07.045 "data_offset": 2048, 00:07:07.045 "data_size": 63488 00:07:07.045 }, 00:07:07.045 { 00:07:07.045 "name": "BaseBdev2", 00:07:07.045 "uuid": "2f5c1ea7-941c-5faf-8454-b0be52493fbb", 00:07:07.045 "is_configured": true, 00:07:07.045 "data_offset": 2048, 00:07:07.045 "data_size": 63488 00:07:07.045 } 00:07:07.045 ] 00:07:07.045 }' 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:07.045 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:07.305 [2024-11-16 18:47:50.724330] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:07.305 [2024-11-16 18:47:50.724444] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:07.305 [2024-11-16 18:47:50.727279] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:07.305 [2024-11-16 18:47:50.727368] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:07.305 [2024-11-16 18:47:50.727422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:07.305 [2024-11-16 18:47:50.727463] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:07.305 { 00:07:07.305 "results": [ 00:07:07.305 { 00:07:07.305 "job": "raid_bdev1", 00:07:07.305 "core_mask": "0x1", 00:07:07.305 "workload": "randrw", 00:07:07.305 "percentage": 50, 00:07:07.305 "status": "finished", 00:07:07.305 "queue_depth": 1, 00:07:07.305 "io_size": 131072, 00:07:07.305 "runtime": 1.32822, 00:07:07.305 "iops": 16412.190751532125, 00:07:07.305 "mibps": 2051.5238439415157, 00:07:07.305 "io_failed": 1, 00:07:07.305 "io_timeout": 0, 00:07:07.305 "avg_latency_us": 84.63403229037299, 00:07:07.305 "min_latency_us": 26.047161572052403, 00:07:07.305 "max_latency_us": 1395.1441048034935 00:07:07.305 } 00:07:07.305 ], 00:07:07.305 "core_count": 1 00:07:07.305 } 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61299 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61299 ']' 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61299 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61299 00:07:07.305 killing process with pid 61299 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61299' 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61299 00:07:07.305 [2024-11-16 18:47:50.754923] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:07.305 18:47:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61299 00:07:07.565 [2024-11-16 18:47:50.891772] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:08.982 18:47:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:08.982 18:47:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:08.982 18:47:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1icYtNIrzF 00:07:08.982 ************************************ 00:07:08.982 END TEST raid_read_error_test 00:07:08.982 ************************************ 00:07:08.982 18:47:52 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:08.982 18:47:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:08.982 18:47:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:08.982 18:47:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:08.982 18:47:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:08.982 00:07:08.982 real 0m4.225s 00:07:08.982 user 0m5.037s 00:07:08.982 sys 0m0.519s 00:07:08.982 18:47:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.982 18:47:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.982 18:47:52 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:08.982 18:47:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:08.982 18:47:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.982 18:47:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:08.982 ************************************ 00:07:08.982 START TEST raid_write_error_test 00:07:08.982 ************************************ 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.982 18:47:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:08.982 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Sv3Fj3efZT 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61439 00:07:08.983 18:47:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61439 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61439 ']' 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.983 18:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:08.983 [2024-11-16 18:47:52.181382] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:08.983 [2024-11-16 18:47:52.181504] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61439 ] 00:07:08.983 [2024-11-16 18:47:52.355274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.242 [2024-11-16 18:47:52.465164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.242 [2024-11-16 18:47:52.662219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.242 [2024-11-16 18:47:52.662288] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:09.810 18:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.810 18:47:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.810 BaseBdev1_malloc 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.810 true 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.810 [2024-11-16 18:47:53.058678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:09.810 [2024-11-16 18:47:53.058729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.810 [2024-11-16 18:47:53.058748] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:09.810 [2024-11-16 18:47:53.058758] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.810 [2024-11-16 18:47:53.060740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.810 [2024-11-16 18:47:53.060778] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:09.810 BaseBdev1 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.810 BaseBdev2_malloc 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:09.810 18:47:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.810 true 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.810 [2024-11-16 18:47:53.124901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:09.810 [2024-11-16 18:47:53.124965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:09.810 [2024-11-16 18:47:53.124991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:09.810 [2024-11-16 18:47:53.125009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:09.810 [2024-11-16 18:47:53.127183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:09.810 [2024-11-16 18:47:53.127222] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:09.810 BaseBdev2 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.810 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.810 [2024-11-16 18:47:53.136914] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:09.810 [2024-11-16 18:47:53.138786] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:09.810 [2024-11-16 18:47:53.138961] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:09.810 [2024-11-16 18:47:53.138977] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:09.810 [2024-11-16 18:47:53.139199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:09.810 [2024-11-16 18:47:53.139376] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:09.811 [2024-11-16 18:47:53.139388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:09.811 [2024-11-16 18:47:53.139537] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:09.811 "name": "raid_bdev1", 00:07:09.811 "uuid": "75daf1d6-173a-4d29-ad74-35a30f37bbe5", 00:07:09.811 "strip_size_kb": 64, 00:07:09.811 "state": "online", 00:07:09.811 "raid_level": "raid0", 00:07:09.811 "superblock": true, 00:07:09.811 "num_base_bdevs": 2, 00:07:09.811 "num_base_bdevs_discovered": 2, 00:07:09.811 "num_base_bdevs_operational": 2, 00:07:09.811 "base_bdevs_list": [ 00:07:09.811 { 00:07:09.811 "name": "BaseBdev1", 00:07:09.811 "uuid": "3fea3190-fae8-59da-bbfe-049bffa33d8d", 00:07:09.811 "is_configured": true, 00:07:09.811 "data_offset": 2048, 00:07:09.811 "data_size": 63488 00:07:09.811 }, 00:07:09.811 { 00:07:09.811 "name": "BaseBdev2", 00:07:09.811 "uuid": "babaade9-4feb-57dc-b95b-b431fc30cb5d", 00:07:09.811 "is_configured": true, 00:07:09.811 "data_offset": 2048, 00:07:09.811 "data_size": 63488 00:07:09.811 } 00:07:09.811 ] 00:07:09.811 }' 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:09.811 18:47:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.379 18:47:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:10.379 18:47:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:10.379 [2024-11-16 18:47:53.641308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:11.317 18:47:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:11.317 "name": "raid_bdev1", 00:07:11.317 "uuid": "75daf1d6-173a-4d29-ad74-35a30f37bbe5", 00:07:11.317 "strip_size_kb": 64, 00:07:11.317 "state": "online", 00:07:11.317 "raid_level": "raid0", 00:07:11.317 "superblock": true, 00:07:11.317 "num_base_bdevs": 2, 00:07:11.317 "num_base_bdevs_discovered": 2, 00:07:11.317 "num_base_bdevs_operational": 2, 00:07:11.317 "base_bdevs_list": [ 00:07:11.317 { 00:07:11.317 "name": "BaseBdev1", 00:07:11.317 "uuid": "3fea3190-fae8-59da-bbfe-049bffa33d8d", 00:07:11.317 "is_configured": true, 00:07:11.317 "data_offset": 2048, 00:07:11.317 "data_size": 63488 00:07:11.317 }, 00:07:11.317 { 00:07:11.317 "name": "BaseBdev2", 00:07:11.317 "uuid": "babaade9-4feb-57dc-b95b-b431fc30cb5d", 00:07:11.317 "is_configured": true, 00:07:11.317 "data_offset": 2048, 00:07:11.317 "data_size": 63488 00:07:11.317 } 00:07:11.317 ] 00:07:11.317 }' 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:11.317 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.577 [2024-11-16 18:47:54.984746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:11.577 [2024-11-16 18:47:54.984782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:11.577 [2024-11-16 18:47:54.987423] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.577 [2024-11-16 18:47:54.987462] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.577 [2024-11-16 18:47:54.987494] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.577 [2024-11-16 18:47:54.987505] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:11.577 { 00:07:11.577 "results": [ 00:07:11.577 { 00:07:11.577 "job": "raid_bdev1", 00:07:11.577 "core_mask": "0x1", 00:07:11.577 "workload": "randrw", 00:07:11.577 "percentage": 50, 00:07:11.577 "status": "finished", 00:07:11.577 "queue_depth": 1, 00:07:11.577 "io_size": 131072, 00:07:11.577 "runtime": 1.344268, 00:07:11.577 "iops": 16800.96528370831, 00:07:11.577 "mibps": 2100.1206604635386, 00:07:11.577 "io_failed": 1, 00:07:11.577 "io_timeout": 0, 00:07:11.577 "avg_latency_us": 82.70141019459052, 00:07:11.577 "min_latency_us": 25.823580786026202, 00:07:11.577 "max_latency_us": 1395.1441048034935 00:07:11.577 } 00:07:11.577 ], 00:07:11.577 "core_count": 1 00:07:11.577 } 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61439 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61439 ']' 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61439 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.577 18:47:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61439 00:07:11.577 18:47:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.577 18:47:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.577 18:47:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61439' 00:07:11.577 killing process with pid 61439 00:07:11.577 18:47:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61439 00:07:11.577 [2024-11-16 18:47:55.018316] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.577 18:47:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61439 00:07:11.836 [2024-11-16 18:47:55.144794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Sv3Fj3efZT 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:13.216 00:07:13.216 real 0m4.183s 00:07:13.216 user 0m4.979s 00:07:13.216 sys 0m0.514s 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:13.216 ************************************ 00:07:13.216 END TEST raid_write_error_test 00:07:13.216 ************************************ 00:07:13.216 18:47:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.216 18:47:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:13.216 18:47:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:13.216 18:47:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:13.216 18:47:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:13.216 18:47:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:13.216 ************************************ 00:07:13.216 START TEST raid_state_function_test 00:07:13.216 ************************************ 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:13.216 Process raid pid: 61577 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61577 
00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61577' 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61577 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61577 ']' 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.216 18:47:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:13.216 [2024-11-16 18:47:56.424262] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:13.216 [2024-11-16 18:47:56.424454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:13.216 [2024-11-16 18:47:56.600514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.475 [2024-11-16 18:47:56.707925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.475 [2024-11-16 18:47:56.899194] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.475 [2024-11-16 18:47:56.899226] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.044 [2024-11-16 18:47:57.248596] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.044 [2024-11-16 18:47:57.248661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.044 [2024-11-16 18:47:57.248673] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.044 [2024-11-16 18:47:57.248688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.044 18:47:57 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.044 "name": "Existed_Raid", 00:07:14.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.044 "strip_size_kb": 64, 00:07:14.044 "state": "configuring", 00:07:14.044 
"raid_level": "concat", 00:07:14.044 "superblock": false, 00:07:14.044 "num_base_bdevs": 2, 00:07:14.044 "num_base_bdevs_discovered": 0, 00:07:14.044 "num_base_bdevs_operational": 2, 00:07:14.044 "base_bdevs_list": [ 00:07:14.044 { 00:07:14.044 "name": "BaseBdev1", 00:07:14.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.044 "is_configured": false, 00:07:14.044 "data_offset": 0, 00:07:14.044 "data_size": 0 00:07:14.044 }, 00:07:14.044 { 00:07:14.044 "name": "BaseBdev2", 00:07:14.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.044 "is_configured": false, 00:07:14.044 "data_offset": 0, 00:07:14.044 "data_size": 0 00:07:14.044 } 00:07:14.044 ] 00:07:14.044 }' 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.044 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.304 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.305 [2024-11-16 18:47:57.648015] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.305 [2024-11-16 18:47:57.648099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:14.305 [2024-11-16 18:47:57.659995] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:14.305 [2024-11-16 18:47:57.660076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:14.305 [2024-11-16 18:47:57.660105] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.305 [2024-11-16 18:47:57.660131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.305 [2024-11-16 18:47:57.704864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.305 BaseBdev1 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.305 [ 00:07:14.305 { 00:07:14.305 "name": "BaseBdev1", 00:07:14.305 "aliases": [ 00:07:14.305 "b8d0f9be-4010-48c8-a13d-32ee77b170f5" 00:07:14.305 ], 00:07:14.305 "product_name": "Malloc disk", 00:07:14.305 "block_size": 512, 00:07:14.305 "num_blocks": 65536, 00:07:14.305 "uuid": "b8d0f9be-4010-48c8-a13d-32ee77b170f5", 00:07:14.305 "assigned_rate_limits": { 00:07:14.305 "rw_ios_per_sec": 0, 00:07:14.305 "rw_mbytes_per_sec": 0, 00:07:14.305 "r_mbytes_per_sec": 0, 00:07:14.305 "w_mbytes_per_sec": 0 00:07:14.305 }, 00:07:14.305 "claimed": true, 00:07:14.305 "claim_type": "exclusive_write", 00:07:14.305 "zoned": false, 00:07:14.305 "supported_io_types": { 00:07:14.305 "read": true, 00:07:14.305 "write": true, 00:07:14.305 "unmap": true, 00:07:14.305 "flush": true, 00:07:14.305 "reset": true, 00:07:14.305 "nvme_admin": false, 00:07:14.305 "nvme_io": false, 00:07:14.305 "nvme_io_md": false, 00:07:14.305 "write_zeroes": true, 00:07:14.305 "zcopy": true, 00:07:14.305 "get_zone_info": false, 00:07:14.305 "zone_management": false, 00:07:14.305 "zone_append": false, 00:07:14.305 "compare": false, 00:07:14.305 "compare_and_write": false, 00:07:14.305 "abort": true, 00:07:14.305 "seek_hole": false, 00:07:14.305 "seek_data": false, 00:07:14.305 "copy": true, 00:07:14.305 "nvme_iov_md": 
false 00:07:14.305 }, 00:07:14.305 "memory_domains": [ 00:07:14.305 { 00:07:14.305 "dma_device_id": "system", 00:07:14.305 "dma_device_type": 1 00:07:14.305 }, 00:07:14.305 { 00:07:14.305 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:14.305 "dma_device_type": 2 00:07:14.305 } 00:07:14.305 ], 00:07:14.305 "driver_specific": {} 00:07:14.305 } 00:07:14.305 ] 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.305 
18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.305 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.564 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.564 "name": "Existed_Raid", 00:07:14.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.564 "strip_size_kb": 64, 00:07:14.564 "state": "configuring", 00:07:14.564 "raid_level": "concat", 00:07:14.564 "superblock": false, 00:07:14.564 "num_base_bdevs": 2, 00:07:14.564 "num_base_bdevs_discovered": 1, 00:07:14.564 "num_base_bdevs_operational": 2, 00:07:14.564 "base_bdevs_list": [ 00:07:14.564 { 00:07:14.564 "name": "BaseBdev1", 00:07:14.564 "uuid": "b8d0f9be-4010-48c8-a13d-32ee77b170f5", 00:07:14.564 "is_configured": true, 00:07:14.564 "data_offset": 0, 00:07:14.564 "data_size": 65536 00:07:14.564 }, 00:07:14.564 { 00:07:14.564 "name": "BaseBdev2", 00:07:14.564 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.564 "is_configured": false, 00:07:14.564 "data_offset": 0, 00:07:14.564 "data_size": 0 00:07:14.564 } 00:07:14.564 ] 00:07:14.564 }' 00:07:14.564 18:47:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.564 18:47:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.823 [2024-11-16 18:47:58.108174] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:14.823 [2024-11-16 18:47:58.108213] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.823 [2024-11-16 18:47:58.120206] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:14.823 [2024-11-16 18:47:58.121919] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:14.823 [2024-11-16 18:47:58.121953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:14.823 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:14.824 "name": "Existed_Raid", 00:07:14.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.824 "strip_size_kb": 64, 00:07:14.824 "state": "configuring", 00:07:14.824 "raid_level": "concat", 00:07:14.824 "superblock": false, 00:07:14.824 "num_base_bdevs": 2, 00:07:14.824 "num_base_bdevs_discovered": 1, 00:07:14.824 "num_base_bdevs_operational": 2, 00:07:14.824 "base_bdevs_list": [ 00:07:14.824 { 00:07:14.824 "name": "BaseBdev1", 00:07:14.824 "uuid": "b8d0f9be-4010-48c8-a13d-32ee77b170f5", 00:07:14.824 "is_configured": true, 00:07:14.824 "data_offset": 0, 00:07:14.824 "data_size": 65536 00:07:14.824 }, 00:07:14.824 { 00:07:14.824 "name": "BaseBdev2", 00:07:14.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:14.824 "is_configured": false, 00:07:14.824 "data_offset": 0, 00:07:14.824 "data_size": 0 00:07:14.824 } 
00:07:14.824 ] 00:07:14.824 }' 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:14.824 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.083 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:15.083 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.083 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.343 [2024-11-16 18:47:58.583164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:15.343 [2024-11-16 18:47:58.583285] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:15.343 [2024-11-16 18:47:58.583311] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:15.343 [2024-11-16 18:47:58.583606] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:15.343 [2024-11-16 18:47:58.583848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:15.343 [2024-11-16 18:47:58.583911] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:15.343 [2024-11-16 18:47:58.584240] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.343 BaseBdev2 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:15.343 18:47:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.343 [ 00:07:15.343 { 00:07:15.343 "name": "BaseBdev2", 00:07:15.343 "aliases": [ 00:07:15.343 "a2d9a617-fb83-4121-ab4e-b85bf1b3c301" 00:07:15.343 ], 00:07:15.343 "product_name": "Malloc disk", 00:07:15.343 "block_size": 512, 00:07:15.343 "num_blocks": 65536, 00:07:15.343 "uuid": "a2d9a617-fb83-4121-ab4e-b85bf1b3c301", 00:07:15.343 "assigned_rate_limits": { 00:07:15.343 "rw_ios_per_sec": 0, 00:07:15.343 "rw_mbytes_per_sec": 0, 00:07:15.343 "r_mbytes_per_sec": 0, 00:07:15.343 "w_mbytes_per_sec": 0 00:07:15.343 }, 00:07:15.343 "claimed": true, 00:07:15.343 "claim_type": "exclusive_write", 00:07:15.343 "zoned": false, 00:07:15.343 "supported_io_types": { 00:07:15.343 "read": true, 00:07:15.343 "write": true, 00:07:15.343 "unmap": true, 00:07:15.343 "flush": true, 00:07:15.343 "reset": true, 00:07:15.343 "nvme_admin": false, 00:07:15.343 "nvme_io": false, 00:07:15.343 "nvme_io_md": 
false, 00:07:15.343 "write_zeroes": true, 00:07:15.343 "zcopy": true, 00:07:15.343 "get_zone_info": false, 00:07:15.343 "zone_management": false, 00:07:15.343 "zone_append": false, 00:07:15.343 "compare": false, 00:07:15.343 "compare_and_write": false, 00:07:15.343 "abort": true, 00:07:15.343 "seek_hole": false, 00:07:15.343 "seek_data": false, 00:07:15.343 "copy": true, 00:07:15.343 "nvme_iov_md": false 00:07:15.343 }, 00:07:15.343 "memory_domains": [ 00:07:15.343 { 00:07:15.343 "dma_device_id": "system", 00:07:15.343 "dma_device_type": 1 00:07:15.343 }, 00:07:15.343 { 00:07:15.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.343 "dma_device_type": 2 00:07:15.343 } 00:07:15.343 ], 00:07:15.343 "driver_specific": {} 00:07:15.343 } 00:07:15.343 ] 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:15.343 "name": "Existed_Raid", 00:07:15.343 "uuid": "e30f43be-9f4f-4834-8aad-2e095987ac94", 00:07:15.343 "strip_size_kb": 64, 00:07:15.343 "state": "online", 00:07:15.343 "raid_level": "concat", 00:07:15.343 "superblock": false, 00:07:15.343 "num_base_bdevs": 2, 00:07:15.343 "num_base_bdevs_discovered": 2, 00:07:15.343 "num_base_bdevs_operational": 2, 00:07:15.343 "base_bdevs_list": [ 00:07:15.343 { 00:07:15.343 "name": "BaseBdev1", 00:07:15.343 "uuid": "b8d0f9be-4010-48c8-a13d-32ee77b170f5", 00:07:15.343 "is_configured": true, 00:07:15.343 "data_offset": 0, 00:07:15.343 "data_size": 65536 00:07:15.343 }, 00:07:15.343 { 00:07:15.343 "name": "BaseBdev2", 00:07:15.343 "uuid": "a2d9a617-fb83-4121-ab4e-b85bf1b3c301", 00:07:15.343 "is_configured": true, 00:07:15.343 "data_offset": 0, 00:07:15.343 "data_size": 65536 00:07:15.343 } 00:07:15.343 ] 00:07:15.343 }' 00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:15.343 18:47:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:15.603 [2024-11-16 18:47:59.030635] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.603 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:15.603 "name": "Existed_Raid", 00:07:15.603 "aliases": [ 00:07:15.603 "e30f43be-9f4f-4834-8aad-2e095987ac94" 00:07:15.604 ], 00:07:15.604 "product_name": "Raid Volume", 00:07:15.604 "block_size": 512, 00:07:15.604 "num_blocks": 131072, 00:07:15.604 "uuid": "e30f43be-9f4f-4834-8aad-2e095987ac94", 00:07:15.604 "assigned_rate_limits": { 00:07:15.604 "rw_ios_per_sec": 0, 00:07:15.604 "rw_mbytes_per_sec": 0, 00:07:15.604 "r_mbytes_per_sec": 
0, 00:07:15.604 "w_mbytes_per_sec": 0 00:07:15.604 }, 00:07:15.604 "claimed": false, 00:07:15.604 "zoned": false, 00:07:15.604 "supported_io_types": { 00:07:15.604 "read": true, 00:07:15.604 "write": true, 00:07:15.604 "unmap": true, 00:07:15.604 "flush": true, 00:07:15.604 "reset": true, 00:07:15.604 "nvme_admin": false, 00:07:15.604 "nvme_io": false, 00:07:15.604 "nvme_io_md": false, 00:07:15.604 "write_zeroes": true, 00:07:15.604 "zcopy": false, 00:07:15.604 "get_zone_info": false, 00:07:15.604 "zone_management": false, 00:07:15.604 "zone_append": false, 00:07:15.604 "compare": false, 00:07:15.604 "compare_and_write": false, 00:07:15.604 "abort": false, 00:07:15.604 "seek_hole": false, 00:07:15.604 "seek_data": false, 00:07:15.604 "copy": false, 00:07:15.604 "nvme_iov_md": false 00:07:15.604 }, 00:07:15.604 "memory_domains": [ 00:07:15.604 { 00:07:15.604 "dma_device_id": "system", 00:07:15.604 "dma_device_type": 1 00:07:15.604 }, 00:07:15.604 { 00:07:15.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.604 "dma_device_type": 2 00:07:15.604 }, 00:07:15.604 { 00:07:15.604 "dma_device_id": "system", 00:07:15.604 "dma_device_type": 1 00:07:15.604 }, 00:07:15.604 { 00:07:15.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.604 "dma_device_type": 2 00:07:15.604 } 00:07:15.604 ], 00:07:15.604 "driver_specific": { 00:07:15.604 "raid": { 00:07:15.604 "uuid": "e30f43be-9f4f-4834-8aad-2e095987ac94", 00:07:15.604 "strip_size_kb": 64, 00:07:15.604 "state": "online", 00:07:15.604 "raid_level": "concat", 00:07:15.604 "superblock": false, 00:07:15.604 "num_base_bdevs": 2, 00:07:15.604 "num_base_bdevs_discovered": 2, 00:07:15.604 "num_base_bdevs_operational": 2, 00:07:15.604 "base_bdevs_list": [ 00:07:15.604 { 00:07:15.604 "name": "BaseBdev1", 00:07:15.604 "uuid": "b8d0f9be-4010-48c8-a13d-32ee77b170f5", 00:07:15.604 "is_configured": true, 00:07:15.604 "data_offset": 0, 00:07:15.604 "data_size": 65536 00:07:15.604 }, 00:07:15.604 { 00:07:15.604 "name": "BaseBdev2", 
00:07:15.604 "uuid": "a2d9a617-fb83-4121-ab4e-b85bf1b3c301", 00:07:15.604 "is_configured": true, 00:07:15.604 "data_offset": 0, 00:07:15.604 "data_size": 65536 00:07:15.604 } 00:07:15.604 ] 00:07:15.604 } 00:07:15.604 } 00:07:15.604 }' 00:07:15.604 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.874 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:15.874 BaseBdev2' 00:07:15.874 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.875 [2024-11-16 18:47:59.226139] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:15.875 [2024-11-16 18:47:59.226213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.875 [2024-11-16 18:47:59.226301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:15.875 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:15.877 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:15.877 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:15.877 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:15.877 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:15.877 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:15.877 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:15.878 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:15.878 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.878 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.139 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.139 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:16.139 "name": "Existed_Raid", 00:07:16.139 "uuid": "e30f43be-9f4f-4834-8aad-2e095987ac94", 00:07:16.139 "strip_size_kb": 64, 00:07:16.139 
"state": "offline", 00:07:16.139 "raid_level": "concat", 00:07:16.139 "superblock": false, 00:07:16.139 "num_base_bdevs": 2, 00:07:16.139 "num_base_bdevs_discovered": 1, 00:07:16.139 "num_base_bdevs_operational": 1, 00:07:16.139 "base_bdevs_list": [ 00:07:16.139 { 00:07:16.139 "name": null, 00:07:16.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.139 "is_configured": false, 00:07:16.139 "data_offset": 0, 00:07:16.139 "data_size": 65536 00:07:16.139 }, 00:07:16.139 { 00:07:16.139 "name": "BaseBdev2", 00:07:16.139 "uuid": "a2d9a617-fb83-4121-ab4e-b85bf1b3c301", 00:07:16.139 "is_configured": true, 00:07:16.139 "data_offset": 0, 00:07:16.139 "data_size": 65536 00:07:16.139 } 00:07:16.139 ] 00:07:16.139 }' 00:07:16.139 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:16.139 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.398 [2024-11-16 18:47:59.760187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:16.398 [2024-11-16 18:47:59.760239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.398 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61577 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61577 ']' 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61577 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61577 00:07:16.657 killing process with pid 61577 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61577' 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61577 00:07:16.657 [2024-11-16 18:47:59.938669] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.657 18:47:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61577 00:07:16.657 [2024-11-16 18:47:59.954700] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:17.596 ************************************ 00:07:17.596 END TEST raid_state_function_test 00:07:17.596 ************************************ 00:07:17.596 18:48:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:17.596 00:07:17.596 real 0m4.667s 00:07:17.596 user 0m6.656s 00:07:17.596 sys 0m0.783s 00:07:17.596 18:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:17.596 18:48:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:17.596 18:48:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:17.596 18:48:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:17.596 18:48:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:17.596 18:48:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:17.596 ************************************ 00:07:17.596 START TEST raid_state_function_test_sb 00:07:17.596 ************************************ 00:07:17.596 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:07:17.596 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:17.596 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:17.596 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:17.596 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:17.856 Process raid pid: 61825 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61825 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61825' 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61825 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61825 ']' 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.856 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:17.856 [2024-11-16 18:48:01.158579] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:17.856 [2024-11-16 18:48:01.158813] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.115 [2024-11-16 18:48:01.335279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.115 [2024-11-16 18:48:01.446763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.374 [2024-11-16 18:48:01.639457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.374 [2024-11-16 18:48:01.639540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.634 [2024-11-16 18:48:01.975187] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:18.634 [2024-11-16 18:48:01.975314] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:18.634 [2024-11-16 18:48:01.975344] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.634 [2024-11-16 18:48:01.975367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.634 
18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.634 18:48:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.634 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:18.634 "name": "Existed_Raid", 00:07:18.634 "uuid": "3f0396fe-59b1-442e-8604-e66e64968a0f", 00:07:18.634 "strip_size_kb": 64, 00:07:18.634 "state": "configuring", 00:07:18.634 "raid_level": "concat", 00:07:18.634 "superblock": true, 00:07:18.634 "num_base_bdevs": 2, 00:07:18.634 "num_base_bdevs_discovered": 0, 00:07:18.634 "num_base_bdevs_operational": 2, 00:07:18.634 "base_bdevs_list": [ 00:07:18.634 { 00:07:18.634 "name": "BaseBdev1", 00:07:18.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.634 "is_configured": false, 00:07:18.634 "data_offset": 0, 00:07:18.634 "data_size": 0 00:07:18.634 }, 00:07:18.634 { 00:07:18.634 "name": "BaseBdev2", 00:07:18.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.634 "is_configured": false, 00:07:18.634 "data_offset": 0, 00:07:18.634 "data_size": 0 00:07:18.634 } 00:07:18.634 ] 00:07:18.634 }' 00:07:18.634 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:18.634 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.203 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.203 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.203 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.203 [2024-11-16 18:48:02.418364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:19.203 [2024-11-16 18:48:02.418440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.204 [2024-11-16 18:48:02.430339] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:19.204 [2024-11-16 18:48:02.430412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:19.204 [2024-11-16 18:48:02.430439] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.204 [2024-11-16 18:48:02.430463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.204 [2024-11-16 18:48:02.476300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.204 BaseBdev1 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.204 [ 00:07:19.204 { 00:07:19.204 "name": "BaseBdev1", 00:07:19.204 "aliases": [ 00:07:19.204 "c27a0988-e2d8-48d0-80fc-ad76aa4a700b" 00:07:19.204 ], 00:07:19.204 "product_name": "Malloc disk", 00:07:19.204 "block_size": 512, 00:07:19.204 "num_blocks": 65536, 00:07:19.204 "uuid": "c27a0988-e2d8-48d0-80fc-ad76aa4a700b", 00:07:19.204 "assigned_rate_limits": { 00:07:19.204 "rw_ios_per_sec": 0, 00:07:19.204 "rw_mbytes_per_sec": 0, 00:07:19.204 "r_mbytes_per_sec": 0, 00:07:19.204 "w_mbytes_per_sec": 0 00:07:19.204 }, 00:07:19.204 "claimed": true, 
00:07:19.204 "claim_type": "exclusive_write", 00:07:19.204 "zoned": false, 00:07:19.204 "supported_io_types": { 00:07:19.204 "read": true, 00:07:19.204 "write": true, 00:07:19.204 "unmap": true, 00:07:19.204 "flush": true, 00:07:19.204 "reset": true, 00:07:19.204 "nvme_admin": false, 00:07:19.204 "nvme_io": false, 00:07:19.204 "nvme_io_md": false, 00:07:19.204 "write_zeroes": true, 00:07:19.204 "zcopy": true, 00:07:19.204 "get_zone_info": false, 00:07:19.204 "zone_management": false, 00:07:19.204 "zone_append": false, 00:07:19.204 "compare": false, 00:07:19.204 "compare_and_write": false, 00:07:19.204 "abort": true, 00:07:19.204 "seek_hole": false, 00:07:19.204 "seek_data": false, 00:07:19.204 "copy": true, 00:07:19.204 "nvme_iov_md": false 00:07:19.204 }, 00:07:19.204 "memory_domains": [ 00:07:19.204 { 00:07:19.204 "dma_device_id": "system", 00:07:19.204 "dma_device_type": 1 00:07:19.204 }, 00:07:19.204 { 00:07:19.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.204 "dma_device_type": 2 00:07:19.204 } 00:07:19.204 ], 00:07:19.204 "driver_specific": {} 00:07:19.204 } 00:07:19.204 ] 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.204 18:48:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.204 "name": "Existed_Raid", 00:07:19.204 "uuid": "82be0a03-ea1a-430f-865a-5fd3baa4c28c", 00:07:19.204 "strip_size_kb": 64, 00:07:19.204 "state": "configuring", 00:07:19.204 "raid_level": "concat", 00:07:19.204 "superblock": true, 00:07:19.204 "num_base_bdevs": 2, 00:07:19.204 "num_base_bdevs_discovered": 1, 00:07:19.204 "num_base_bdevs_operational": 2, 00:07:19.204 "base_bdevs_list": [ 00:07:19.204 { 00:07:19.204 "name": "BaseBdev1", 00:07:19.204 "uuid": "c27a0988-e2d8-48d0-80fc-ad76aa4a700b", 00:07:19.204 "is_configured": true, 00:07:19.204 "data_offset": 2048, 00:07:19.204 "data_size": 63488 00:07:19.204 }, 00:07:19.204 { 00:07:19.204 "name": "BaseBdev2", 00:07:19.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.204 
"is_configured": false, 00:07:19.204 "data_offset": 0, 00:07:19.204 "data_size": 0 00:07:19.204 } 00:07:19.204 ] 00:07:19.204 }' 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.204 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.464 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:19.464 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.464 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.464 [2024-11-16 18:48:02.927583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:19.464 [2024-11-16 18:48:02.927690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:19.464 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.464 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:19.464 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.464 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.724 [2024-11-16 18:48:02.939642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:19.724 [2024-11-16 18:48:02.941437] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:19.724 [2024-11-16 18:48:02.941511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.724 18:48:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.724 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.725 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.725 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.725 18:48:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:19.725 "name": "Existed_Raid", 00:07:19.725 "uuid": "9a2fb400-864a-4961-84ac-5af5c954984e", 00:07:19.725 "strip_size_kb": 64, 00:07:19.725 "state": "configuring", 00:07:19.725 "raid_level": "concat", 00:07:19.725 "superblock": true, 00:07:19.725 "num_base_bdevs": 2, 00:07:19.725 "num_base_bdevs_discovered": 1, 00:07:19.725 "num_base_bdevs_operational": 2, 00:07:19.725 "base_bdevs_list": [ 00:07:19.725 { 00:07:19.725 "name": "BaseBdev1", 00:07:19.725 "uuid": "c27a0988-e2d8-48d0-80fc-ad76aa4a700b", 00:07:19.725 "is_configured": true, 00:07:19.725 "data_offset": 2048, 00:07:19.725 "data_size": 63488 00:07:19.725 }, 00:07:19.725 { 00:07:19.725 "name": "BaseBdev2", 00:07:19.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:19.725 "is_configured": false, 00:07:19.725 "data_offset": 0, 00:07:19.725 "data_size": 0 00:07:19.725 } 00:07:19.725 ] 00:07:19.725 }' 00:07:19.725 18:48:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:19.725 18:48:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.985 [2024-11-16 18:48:03.365475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:19.985 [2024-11-16 18:48:03.365738] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:19.985 [2024-11-16 18:48:03.365755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:19.985 [2024-11-16 18:48:03.366026] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:19.985 [2024-11-16 18:48:03.366174] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:19.985 [2024-11-16 18:48:03.366187] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:19.985 BaseBdev2 00:07:19.985 [2024-11-16 18:48:03.366331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.985 18:48:03 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.985 [ 00:07:19.985 { 00:07:19.985 "name": "BaseBdev2", 00:07:19.985 "aliases": [ 00:07:19.985 "a4cb3efa-dc33-44e4-a803-1543a31d7acc" 00:07:19.985 ], 00:07:19.985 "product_name": "Malloc disk", 00:07:19.985 "block_size": 512, 00:07:19.985 "num_blocks": 65536, 00:07:19.985 "uuid": "a4cb3efa-dc33-44e4-a803-1543a31d7acc", 00:07:19.985 "assigned_rate_limits": { 00:07:19.985 "rw_ios_per_sec": 0, 00:07:19.985 "rw_mbytes_per_sec": 0, 00:07:19.985 "r_mbytes_per_sec": 0, 00:07:19.985 "w_mbytes_per_sec": 0 00:07:19.985 }, 00:07:19.985 "claimed": true, 00:07:19.985 "claim_type": "exclusive_write", 00:07:19.985 "zoned": false, 00:07:19.985 "supported_io_types": { 00:07:19.985 "read": true, 00:07:19.985 "write": true, 00:07:19.985 "unmap": true, 00:07:19.985 "flush": true, 00:07:19.985 "reset": true, 00:07:19.985 "nvme_admin": false, 00:07:19.985 "nvme_io": false, 00:07:19.985 "nvme_io_md": false, 00:07:19.985 "write_zeroes": true, 00:07:19.985 "zcopy": true, 00:07:19.985 "get_zone_info": false, 00:07:19.985 "zone_management": false, 00:07:19.985 "zone_append": false, 00:07:19.985 "compare": false, 00:07:19.985 "compare_and_write": false, 00:07:19.985 "abort": true, 00:07:19.985 "seek_hole": false, 00:07:19.985 "seek_data": false, 00:07:19.985 "copy": true, 00:07:19.985 "nvme_iov_md": false 00:07:19.985 }, 00:07:19.985 "memory_domains": [ 00:07:19.985 { 00:07:19.985 "dma_device_id": "system", 00:07:19.985 "dma_device_type": 1 00:07:19.985 }, 00:07:19.985 { 00:07:19.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.985 "dma_device_type": 2 00:07:19.985 } 00:07:19.985 ], 00:07:19.985 "driver_specific": {} 00:07:19.985 } 00:07:19.985 ] 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:19.985 18:48:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:19.985 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:19.986 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.986 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.986 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:19.986 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.246 18:48:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:20.246 "name": "Existed_Raid", 00:07:20.246 "uuid": "9a2fb400-864a-4961-84ac-5af5c954984e", 00:07:20.246 "strip_size_kb": 64, 00:07:20.246 "state": "online", 00:07:20.246 "raid_level": "concat", 00:07:20.246 "superblock": true, 00:07:20.246 "num_base_bdevs": 2, 00:07:20.246 "num_base_bdevs_discovered": 2, 00:07:20.246 "num_base_bdevs_operational": 2, 00:07:20.246 "base_bdevs_list": [ 00:07:20.246 { 00:07:20.246 "name": "BaseBdev1", 00:07:20.246 "uuid": "c27a0988-e2d8-48d0-80fc-ad76aa4a700b", 00:07:20.246 "is_configured": true, 00:07:20.246 "data_offset": 2048, 00:07:20.246 "data_size": 63488 00:07:20.246 }, 00:07:20.246 { 00:07:20.246 "name": "BaseBdev2", 00:07:20.246 "uuid": "a4cb3efa-dc33-44e4-a803-1543a31d7acc", 00:07:20.246 "is_configured": true, 00:07:20.246 "data_offset": 2048, 00:07:20.246 "data_size": 63488 00:07:20.246 } 00:07:20.246 ] 00:07:20.246 }' 00:07:20.246 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:20.246 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.507 [2024-11-16 18:48:03.836999] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:20.507 "name": "Existed_Raid", 00:07:20.507 "aliases": [ 00:07:20.507 "9a2fb400-864a-4961-84ac-5af5c954984e" 00:07:20.507 ], 00:07:20.507 "product_name": "Raid Volume", 00:07:20.507 "block_size": 512, 00:07:20.507 "num_blocks": 126976, 00:07:20.507 "uuid": "9a2fb400-864a-4961-84ac-5af5c954984e", 00:07:20.507 "assigned_rate_limits": { 00:07:20.507 "rw_ios_per_sec": 0, 00:07:20.507 "rw_mbytes_per_sec": 0, 00:07:20.507 "r_mbytes_per_sec": 0, 00:07:20.507 "w_mbytes_per_sec": 0 00:07:20.507 }, 00:07:20.507 "claimed": false, 00:07:20.507 "zoned": false, 00:07:20.507 "supported_io_types": { 00:07:20.507 "read": true, 00:07:20.507 "write": true, 00:07:20.507 "unmap": true, 00:07:20.507 "flush": true, 00:07:20.507 "reset": true, 00:07:20.507 "nvme_admin": false, 00:07:20.507 "nvme_io": false, 00:07:20.507 "nvme_io_md": false, 00:07:20.507 "write_zeroes": true, 00:07:20.507 "zcopy": false, 00:07:20.507 "get_zone_info": false, 00:07:20.507 "zone_management": false, 00:07:20.507 "zone_append": false, 00:07:20.507 "compare": false, 00:07:20.507 "compare_and_write": false, 00:07:20.507 "abort": false, 00:07:20.507 "seek_hole": false, 00:07:20.507 "seek_data": false, 00:07:20.507 "copy": false, 00:07:20.507 "nvme_iov_md": false 00:07:20.507 }, 00:07:20.507 "memory_domains": [ 00:07:20.507 { 00:07:20.507 
"dma_device_id": "system", 00:07:20.507 "dma_device_type": 1 00:07:20.507 }, 00:07:20.507 { 00:07:20.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.507 "dma_device_type": 2 00:07:20.507 }, 00:07:20.507 { 00:07:20.507 "dma_device_id": "system", 00:07:20.507 "dma_device_type": 1 00:07:20.507 }, 00:07:20.507 { 00:07:20.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.507 "dma_device_type": 2 00:07:20.507 } 00:07:20.507 ], 00:07:20.507 "driver_specific": { 00:07:20.507 "raid": { 00:07:20.507 "uuid": "9a2fb400-864a-4961-84ac-5af5c954984e", 00:07:20.507 "strip_size_kb": 64, 00:07:20.507 "state": "online", 00:07:20.507 "raid_level": "concat", 00:07:20.507 "superblock": true, 00:07:20.507 "num_base_bdevs": 2, 00:07:20.507 "num_base_bdevs_discovered": 2, 00:07:20.507 "num_base_bdevs_operational": 2, 00:07:20.507 "base_bdevs_list": [ 00:07:20.507 { 00:07:20.507 "name": "BaseBdev1", 00:07:20.507 "uuid": "c27a0988-e2d8-48d0-80fc-ad76aa4a700b", 00:07:20.507 "is_configured": true, 00:07:20.507 "data_offset": 2048, 00:07:20.507 "data_size": 63488 00:07:20.507 }, 00:07:20.507 { 00:07:20.507 "name": "BaseBdev2", 00:07:20.507 "uuid": "a4cb3efa-dc33-44e4-a803-1543a31d7acc", 00:07:20.507 "is_configured": true, 00:07:20.507 "data_offset": 2048, 00:07:20.507 "data_size": 63488 00:07:20.507 } 00:07:20.507 ] 00:07:20.507 } 00:07:20.507 } 00:07:20.507 }' 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:20.507 BaseBdev2' 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:20.507 18:48:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.507 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:20.508 18:48:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.508 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.508 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.769 18:48:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:20.769 [2024-11-16 18:48:04.080301] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:20.769 [2024-11-16 18:48:04.080374] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:20.769 [2024-11-16 18:48:04.080442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:20.769 "name": "Existed_Raid",
00:07:20.769 "uuid": "9a2fb400-864a-4961-84ac-5af5c954984e",
00:07:20.769 "strip_size_kb": 64,
00:07:20.769 "state": "offline",
00:07:20.769 "raid_level": "concat",
00:07:20.769 "superblock": true,
00:07:20.769 "num_base_bdevs": 2,
00:07:20.769 "num_base_bdevs_discovered": 1,
00:07:20.769 "num_base_bdevs_operational": 1,
00:07:20.769 "base_bdevs_list": [
00:07:20.769 {
00:07:20.769 "name": null,
00:07:20.769 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:20.769 "is_configured": false,
00:07:20.769 "data_offset": 0,
00:07:20.769 "data_size": 63488
00:07:20.769 },
00:07:20.769 {
00:07:20.769 "name": "BaseBdev2",
00:07:20.769 "uuid": "a4cb3efa-dc33-44e4-a803-1543a31d7acc",
00:07:20.769 "is_configured": true,
00:07:20.769 "data_offset": 2048,
00:07:20.769 "data_size": 63488
00:07:20.769 }
00:07:20.769 ]
00:07:20.769 }'
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:20.769 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:21.341 [2024-11-16 18:48:04.616253] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:21.341 [2024-11-16 18:48:04.616306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61825
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61825 ']'
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61825
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61825
killing process with pid 61825
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61825'
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61825
00:07:21.341 [2024-11-16 18:48:04.798187] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:21.341 18:48:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61825
00:07:21.601 [2024-11-16 18:48:04.814347] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:22.543 18:48:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0
00:07:22.543
00:07:22.543 real 0m4.799s
00:07:22.543 user 0m6.947s
00:07:22.543 sys 0m0.752s
00:07:22.543 18:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.543 18:48:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:22.543 ************************************
00:07:22.543 END TEST raid_state_function_test_sb
00:07:22.543 ************************************
00:07:22.543 18:48:05 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2
00:07:22.543 18:48:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:22.543 18:48:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.543 18:48:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:22.543 ************************************
00:07:22.543 START TEST raid_superblock_test
00:07:22.543 ************************************
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=()
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=()
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=()
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']'
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64'
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62077
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62077
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62077 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:22.543 18:48:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.803 [2024-11-16 18:48:06.023789] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:07:22.803 [2024-11-16 18:48:06.024001] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62077 ]
00:07:22.803 [2024-11-16 18:48:06.195645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.063 [2024-11-16 18:48:06.303467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.063 [2024-11-16 18:48:06.491403] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:23.063 [2024-11-16 18:48:06.491539] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.633 malloc1
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.633 [2024-11-16 18:48:06.888390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:23.633 [2024-11-16 18:48:06.888455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:23.633 [2024-11-16 18:48:06.888480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:23.633 [2024-11-16 18:48:06.888489] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:23.633 [2024-11-16 18:48:06.890486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:23.633 [2024-11-16 18:48:06.890522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:23.633 pt1
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:07:23.633 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.634 malloc2
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.634 [2024-11-16 18:48:06.942630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:07:23.634 [2024-11-16 18:48:06.942753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:23.634 [2024-11-16 18:48:06.942794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:07:23.634 [2024-11-16 18:48:06.942824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:23.634 [2024-11-16 18:48:06.945055] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:23.634 [2024-11-16 18:48:06.945155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:07:23.634 pt2
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.634 [2024-11-16 18:48:06.954686] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:23.634 [2024-11-16 18:48:06.956670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:07:23.634 [2024-11-16 18:48:06.956889] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:23.634 [2024-11-16 18:48:06.956941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:23.634 [2024-11-16 18:48:06.957221] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:23.634 [2024-11-16 18:48:06.957424] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:23.634 [2024-11-16 18:48:06.957471] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:07:23.634 [2024-11-16 18:48:06.957679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.634 18:48:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:23.634 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:23.634 "name": "raid_bdev1",
00:07:23.634 "uuid": "16e4418a-3014-4ff4-8d11-74fd8dc42662",
00:07:23.634 "strip_size_kb": 64,
00:07:23.634 "state": "online",
00:07:23.634 "raid_level": "concat",
00:07:23.634 "superblock": true,
00:07:23.634 "num_base_bdevs": 2,
00:07:23.634 "num_base_bdevs_discovered": 2,
00:07:23.634 "num_base_bdevs_operational": 2,
00:07:23.634 "base_bdevs_list": [
00:07:23.634 {
00:07:23.634 "name": "pt1",
00:07:23.634 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:23.634 "is_configured": true,
00:07:23.634 "data_offset": 2048,
00:07:23.634 "data_size": 63488
00:07:23.634 },
00:07:23.634 {
00:07:23.634 "name": "pt2",
00:07:23.634 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:23.634 "is_configured": true,
00:07:23.634 "data_offset": 2048,
00:07:23.634 "data_size": 63488
00:07:23.634 }
00:07:23.634 ]
00:07:23.634 }'
00:07:23.634 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:23.634 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:23.894 [2024-11-16 18:48:07.334269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:23.894 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.193 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:07:24.193 "name": "raid_bdev1",
00:07:24.193 "aliases": [
00:07:24.193 "16e4418a-3014-4ff4-8d11-74fd8dc42662"
00:07:24.193 ],
00:07:24.193 "product_name": "Raid Volume",
00:07:24.193 "block_size": 512,
00:07:24.193 "num_blocks": 126976,
00:07:24.193 "uuid": "16e4418a-3014-4ff4-8d11-74fd8dc42662",
00:07:24.193 "assigned_rate_limits": {
00:07:24.193 "rw_ios_per_sec": 0,
00:07:24.193 "rw_mbytes_per_sec": 0,
00:07:24.193 "r_mbytes_per_sec": 0,
00:07:24.193 "w_mbytes_per_sec": 0
00:07:24.193 },
00:07:24.193 "claimed": false,
00:07:24.193 "zoned": false,
00:07:24.193 "supported_io_types": {
00:07:24.193 "read": true,
00:07:24.193 "write": true,
00:07:24.194 "unmap": true,
00:07:24.194 "flush": true,
00:07:24.194 "reset": true,
00:07:24.194 "nvme_admin": false,
00:07:24.194 "nvme_io": false,
00:07:24.194 "nvme_io_md": false,
00:07:24.194 "write_zeroes": true,
00:07:24.194 "zcopy": false,
00:07:24.194 "get_zone_info": false,
00:07:24.194 "zone_management": false,
00:07:24.194 "zone_append": false,
00:07:24.194 "compare": false,
00:07:24.194 "compare_and_write": false,
00:07:24.194 "abort": false,
00:07:24.194 "seek_hole": false,
00:07:24.194 "seek_data": false,
00:07:24.194 "copy": false,
00:07:24.194 "nvme_iov_md": false
00:07:24.194 },
00:07:24.194 "memory_domains": [
00:07:24.194 {
00:07:24.194 "dma_device_id": "system",
00:07:24.194 "dma_device_type": 1
00:07:24.194 },
00:07:24.194 {
00:07:24.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:24.194 "dma_device_type": 2
00:07:24.194 },
00:07:24.194 {
00:07:24.194 "dma_device_id": "system",
00:07:24.194 "dma_device_type": 1
00:07:24.194 },
00:07:24.194 {
00:07:24.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:07:24.194 "dma_device_type": 2
00:07:24.194 }
00:07:24.194 ],
00:07:24.194 "driver_specific": {
00:07:24.194 "raid": {
00:07:24.194 "uuid": "16e4418a-3014-4ff4-8d11-74fd8dc42662",
00:07:24.194 "strip_size_kb": 64,
00:07:24.194 "state": "online",
00:07:24.194 "raid_level": "concat",
00:07:24.194 "superblock": true,
00:07:24.194 "num_base_bdevs": 2,
00:07:24.194 "num_base_bdevs_discovered": 2,
00:07:24.194 "num_base_bdevs_operational": 2,
00:07:24.194 "base_bdevs_list": [
00:07:24.194 {
00:07:24.194 "name": "pt1",
00:07:24.194 "uuid": "00000000-0000-0000-0000-000000000001",
00:07:24.194 "is_configured": true,
00:07:24.194 "data_offset": 2048,
00:07:24.194 "data_size": 63488
00:07:24.194 },
00:07:24.194 {
00:07:24.194 "name": "pt2",
00:07:24.194 "uuid": "00000000-0000-0000-0000-000000000002",
00:07:24.194 "is_configured": true,
00:07:24.194 "data_offset": 2048,
00:07:24.194 "data_size": 63488
00:07:24.194 }
00:07:24.194 ]
00:07:24.194 }
00:07:24.194 }
00:07:24.194 }'
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:07:24.194 pt2'
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:07:24.194 [2024-11-16 18:48:07.553871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=16e4418a-3014-4ff4-8d11-74fd8dc42662
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 16e4418a-3014-4ff4-8d11-74fd8dc42662 ']'
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.194 [2024-11-16 18:48:07.601498] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:24.194 [2024-11-16 18:48:07.601562] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:24.194 [2024-11-16 18:48:07.601665] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:24.194 [2024-11-16 18:48:07.601737] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:24.194 [2024-11-16 18:48:07.601785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.194 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.468 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.468 [2024-11-16 18:48:07.725312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:07:24.469 [2024-11-16 18:48:07.727124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:24.469 [2024-11-16 18:48:07.727189] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:07:24.469 [2024-11-16 18:48:07.727240] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:07:24.469 [2024-11-16 18:48:07.727254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:24.469 [2024-11-16 18:48:07.727264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:07:24.469 request:
00:07:24.469 {
00:07:24.469 "name": "raid_bdev1",
00:07:24.469 "raid_level": "concat",
00:07:24.469 "base_bdevs": [
00:07:24.469 "malloc1",
00:07:24.469 "malloc2"
00:07:24.469 ],
00:07:24.469 "strip_size_kb": 64,
00:07:24.469 "superblock": false,
00:07:24.469 "method": "bdev_raid_create",
00:07:24.469 "req_id": 1
00:07:24.469 }
00:07:24.469 Got JSON-RPC error response
00:07:24.469 response:
00:07:24.469 {
00:07:24.469 "code": -17,
00:07:24.469 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:07:24.469 }
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:24.469 [2024-11-16 18:48:07.793173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:07:24.469 [2024-11-16 18:48:07.793221] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:24.469 [2024-11-16 18:48:07.793238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:07:24.469 [2024-11-16 18:48:07.793249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:24.469 [2024-11-16 18:48:07.795351] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:24.469 [2024-11-16 18:48:07.795387] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:07:24.469 [2024-11-16 18:48:07.795457] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:07:24.469 [2024-11-16 18:48:07.795511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:07:24.469 pt1
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2
00:07:24.469 18:48:07 bdev_raid.raid_superblock_test --
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.469 "name": "raid_bdev1", 00:07:24.469 "uuid": "16e4418a-3014-4ff4-8d11-74fd8dc42662", 00:07:24.469 "strip_size_kb": 64, 00:07:24.469 "state": "configuring", 00:07:24.469 "raid_level": "concat", 00:07:24.469 "superblock": true, 00:07:24.469 "num_base_bdevs": 2, 00:07:24.469 "num_base_bdevs_discovered": 1, 00:07:24.469 "num_base_bdevs_operational": 2, 00:07:24.469 "base_bdevs_list": [ 00:07:24.469 { 00:07:24.469 
"name": "pt1", 00:07:24.469 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:24.469 "is_configured": true, 00:07:24.469 "data_offset": 2048, 00:07:24.469 "data_size": 63488 00:07:24.469 }, 00:07:24.469 { 00:07:24.469 "name": null, 00:07:24.469 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:24.469 "is_configured": false, 00:07:24.469 "data_offset": 2048, 00:07:24.469 "data_size": 63488 00:07:24.469 } 00:07:24.469 ] 00:07:24.469 }' 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.469 18:48:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.730 [2024-11-16 18:48:08.180525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:24.730 [2024-11-16 18:48:08.180635] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.730 [2024-11-16 18:48:08.180686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:24.730 [2024-11-16 18:48:08.180733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.730 [2024-11-16 18:48:08.181196] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.730 [2024-11-16 18:48:08.181257] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:24.730 [2024-11-16 18:48:08.181360] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:24.730 [2024-11-16 18:48:08.181412] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:24.730 [2024-11-16 18:48:08.181565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:24.730 [2024-11-16 18:48:08.181605] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.730 [2024-11-16 18:48:08.181866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:24.730 [2024-11-16 18:48:08.182044] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:24.730 [2024-11-16 18:48:08.182085] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:24.730 [2024-11-16 18:48:08.182247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.730 pt2 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:24.730 
18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.730 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.990 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.990 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:24.990 "name": "raid_bdev1", 00:07:24.990 "uuid": "16e4418a-3014-4ff4-8d11-74fd8dc42662", 00:07:24.990 "strip_size_kb": 64, 00:07:24.990 "state": "online", 00:07:24.990 "raid_level": "concat", 00:07:24.990 "superblock": true, 00:07:24.990 "num_base_bdevs": 2, 00:07:24.990 "num_base_bdevs_discovered": 2, 00:07:24.990 "num_base_bdevs_operational": 2, 00:07:24.990 "base_bdevs_list": [ 00:07:24.990 { 00:07:24.990 "name": "pt1", 00:07:24.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:24.990 "is_configured": true, 00:07:24.990 "data_offset": 2048, 00:07:24.990 "data_size": 63488 00:07:24.990 }, 00:07:24.990 { 00:07:24.990 "name": "pt2", 00:07:24.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:24.990 "is_configured": true, 00:07:24.990 "data_offset": 2048, 00:07:24.990 "data_size": 63488 
00:07:24.990 } 00:07:24.990 ] 00:07:24.990 }' 00:07:24.990 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:24.990 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.250 [2024-11-16 18:48:08.639974] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:25.250 "name": "raid_bdev1", 00:07:25.250 "aliases": [ 00:07:25.250 "16e4418a-3014-4ff4-8d11-74fd8dc42662" 00:07:25.250 ], 00:07:25.250 "product_name": "Raid Volume", 00:07:25.250 "block_size": 512, 00:07:25.250 "num_blocks": 126976, 00:07:25.250 "uuid": "16e4418a-3014-4ff4-8d11-74fd8dc42662", 00:07:25.250 "assigned_rate_limits": { 00:07:25.250 
"rw_ios_per_sec": 0, 00:07:25.250 "rw_mbytes_per_sec": 0, 00:07:25.250 "r_mbytes_per_sec": 0, 00:07:25.250 "w_mbytes_per_sec": 0 00:07:25.250 }, 00:07:25.250 "claimed": false, 00:07:25.250 "zoned": false, 00:07:25.250 "supported_io_types": { 00:07:25.250 "read": true, 00:07:25.250 "write": true, 00:07:25.250 "unmap": true, 00:07:25.250 "flush": true, 00:07:25.250 "reset": true, 00:07:25.250 "nvme_admin": false, 00:07:25.250 "nvme_io": false, 00:07:25.250 "nvme_io_md": false, 00:07:25.250 "write_zeroes": true, 00:07:25.250 "zcopy": false, 00:07:25.250 "get_zone_info": false, 00:07:25.250 "zone_management": false, 00:07:25.250 "zone_append": false, 00:07:25.250 "compare": false, 00:07:25.250 "compare_and_write": false, 00:07:25.250 "abort": false, 00:07:25.250 "seek_hole": false, 00:07:25.250 "seek_data": false, 00:07:25.250 "copy": false, 00:07:25.250 "nvme_iov_md": false 00:07:25.250 }, 00:07:25.250 "memory_domains": [ 00:07:25.250 { 00:07:25.250 "dma_device_id": "system", 00:07:25.250 "dma_device_type": 1 00:07:25.250 }, 00:07:25.250 { 00:07:25.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.250 "dma_device_type": 2 00:07:25.250 }, 00:07:25.250 { 00:07:25.250 "dma_device_id": "system", 00:07:25.250 "dma_device_type": 1 00:07:25.250 }, 00:07:25.250 { 00:07:25.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.250 "dma_device_type": 2 00:07:25.250 } 00:07:25.250 ], 00:07:25.250 "driver_specific": { 00:07:25.250 "raid": { 00:07:25.250 "uuid": "16e4418a-3014-4ff4-8d11-74fd8dc42662", 00:07:25.250 "strip_size_kb": 64, 00:07:25.250 "state": "online", 00:07:25.250 "raid_level": "concat", 00:07:25.250 "superblock": true, 00:07:25.250 "num_base_bdevs": 2, 00:07:25.250 "num_base_bdevs_discovered": 2, 00:07:25.250 "num_base_bdevs_operational": 2, 00:07:25.250 "base_bdevs_list": [ 00:07:25.250 { 00:07:25.250 "name": "pt1", 00:07:25.250 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:25.250 "is_configured": true, 00:07:25.250 "data_offset": 2048, 00:07:25.250 
"data_size": 63488 00:07:25.250 }, 00:07:25.250 { 00:07:25.250 "name": "pt2", 00:07:25.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:25.250 "is_configured": true, 00:07:25.250 "data_offset": 2048, 00:07:25.250 "data_size": 63488 00:07:25.250 } 00:07:25.250 ] 00:07:25.250 } 00:07:25.250 } 00:07:25.250 }' 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:25.250 pt2' 00:07:25.250 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 
00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:25.510 [2024-11-16 18:48:08.847578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 16e4418a-3014-4ff4-8d11-74fd8dc42662 '!=' 16e4418a-3014-4ff4-8d11-74fd8dc42662 ']' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62077 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62077 ']' 
00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62077 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62077 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62077' 00:07:25.510 killing process with pid 62077 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62077 00:07:25.510 [2024-11-16 18:48:08.920207] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:25.510 [2024-11-16 18:48:08.920333] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:25.510 18:48:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62077 00:07:25.510 [2024-11-16 18:48:08.920415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:25.510 [2024-11-16 18:48:08.920457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:25.769 [2024-11-16 18:48:09.112356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:26.709 18:48:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:26.709 ************************************ 00:07:26.709 END TEST raid_superblock_test 00:07:26.709 ************************************ 00:07:26.709 00:07:26.709 real 0m4.226s 00:07:26.709 user 0m5.920s 00:07:26.709 sys 
0m0.711s 00:07:26.709 18:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.709 18:48:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.969 18:48:10 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:07:26.969 18:48:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:26.969 18:48:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.969 18:48:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:26.969 ************************************ 00:07:26.969 START TEST raid_read_error_test 00:07:26.969 ************************************ 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:26.969 
18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9BwpMVsQJ0 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62283 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62283 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62283 ']' 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.969 18:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:07:26.970 18:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.970 18:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.970 18:48:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:26.970 [2024-11-16 18:48:10.332428] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:26.970 [2024-11-16 18:48:10.332641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62283 ] 00:07:27.230 [2024-11-16 18:48:10.505446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.230 [2024-11-16 18:48:10.607948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.489 [2024-11-16 18:48:10.788518] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.489 [2024-11-16 18:48:10.788555] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:27.750 BaseBdev1_malloc 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.750 true 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.750 [2024-11-16 18:48:11.209185] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:27.750 [2024-11-16 18:48:11.209294] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:27.750 [2024-11-16 18:48:11.209329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:27.750 [2024-11-16 18:48:11.209362] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:27.750 [2024-11-16 18:48:11.211364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:27.750 [2024-11-16 18:48:11.211449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:27.750 BaseBdev1 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:27.750 18:48:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.750 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.010 BaseBdev2_malloc 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.010 true 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.010 [2024-11-16 18:48:11.275106] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:28.010 [2024-11-16 18:48:11.275154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:28.010 [2024-11-16 18:48:11.275169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:28.010 [2024-11-16 18:48:11.275179] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:28.010 [2024-11-16 18:48:11.277184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:28.010 [2024-11-16 18:48:11.277224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 
00:07:28.010 BaseBdev2 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.010 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.011 [2024-11-16 18:48:11.287153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:28.011 [2024-11-16 18:48:11.288935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:28.011 [2024-11-16 18:48:11.289124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:28.011 [2024-11-16 18:48:11.289138] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:28.011 [2024-11-16 18:48:11.289351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:28.011 [2024-11-16 18:48:11.289510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:28.011 [2024-11-16 18:48:11.289521] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:28.011 [2024-11-16 18:48:11.289651] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:28.011 "name": "raid_bdev1", 00:07:28.011 "uuid": "5de00e83-4f5a-497c-a3f5-2e5ef5e8708b", 00:07:28.011 "strip_size_kb": 64, 00:07:28.011 "state": "online", 00:07:28.011 "raid_level": "concat", 00:07:28.011 "superblock": true, 00:07:28.011 "num_base_bdevs": 2, 00:07:28.011 "num_base_bdevs_discovered": 2, 00:07:28.011 "num_base_bdevs_operational": 2, 00:07:28.011 "base_bdevs_list": [ 00:07:28.011 { 00:07:28.011 "name": "BaseBdev1", 00:07:28.011 "uuid": "41eec094-0c86-54e5-a693-e6866e988f29", 00:07:28.011 "is_configured": true, 00:07:28.011 "data_offset": 2048, 00:07:28.011 "data_size": 63488 
00:07:28.011 }, 00:07:28.011 { 00:07:28.011 "name": "BaseBdev2", 00:07:28.011 "uuid": "0ba2c38a-9629-5cec-8f45-8a85cae619dc", 00:07:28.011 "is_configured": true, 00:07:28.011 "data_offset": 2048, 00:07:28.011 "data_size": 63488 00:07:28.011 } 00:07:28.011 ] 00:07:28.011 }' 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:28.011 18:48:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:28.271 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:28.271 18:48:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:28.530 [2024-11-16 18:48:11.759676] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:29.555 "name": "raid_bdev1", 00:07:29.555 "uuid": "5de00e83-4f5a-497c-a3f5-2e5ef5e8708b", 00:07:29.555 "strip_size_kb": 64, 00:07:29.555 "state": "online", 00:07:29.555 "raid_level": "concat", 00:07:29.555 "superblock": true, 00:07:29.555 "num_base_bdevs": 2, 00:07:29.555 "num_base_bdevs_discovered": 2, 00:07:29.555 "num_base_bdevs_operational": 2, 00:07:29.555 "base_bdevs_list": [ 00:07:29.555 { 00:07:29.555 "name": "BaseBdev1", 00:07:29.555 "uuid": "41eec094-0c86-54e5-a693-e6866e988f29", 00:07:29.555 "is_configured": true, 00:07:29.555 "data_offset": 2048, 00:07:29.555 "data_size": 63488 
00:07:29.555 }, 00:07:29.555 { 00:07:29.555 "name": "BaseBdev2", 00:07:29.555 "uuid": "0ba2c38a-9629-5cec-8f45-8a85cae619dc", 00:07:29.555 "is_configured": true, 00:07:29.555 "data_offset": 2048, 00:07:29.555 "data_size": 63488 00:07:29.555 } 00:07:29.555 ] 00:07:29.555 }' 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:29.555 18:48:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:29.815 [2024-11-16 18:48:13.113947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:29.815 [2024-11-16 18:48:13.114050] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.815 [2024-11-16 18:48:13.116800] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.815 [2024-11-16 18:48:13.116885] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:29.815 [2024-11-16 18:48:13.116935] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:29.815 [2024-11-16 18:48:13.116979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:29.815 { 00:07:29.815 "results": [ 00:07:29.815 { 00:07:29.815 "job": "raid_bdev1", 00:07:29.815 "core_mask": "0x1", 00:07:29.815 "workload": "randrw", 00:07:29.815 "percentage": 50, 00:07:29.815 "status": "finished", 00:07:29.815 "queue_depth": 1, 00:07:29.815 "io_size": 131072, 00:07:29.815 "runtime": 1.355247, 00:07:29.815 "iops": 17293.526567481797, 00:07:29.815 "mibps": 2161.6908209352246, 00:07:29.815 
"io_failed": 1, 00:07:29.815 "io_timeout": 0, 00:07:29.815 "avg_latency_us": 80.23557280734342, 00:07:29.815 "min_latency_us": 24.482096069868994, 00:07:29.815 "max_latency_us": 1373.6803493449781 00:07:29.815 } 00:07:29.815 ], 00:07:29.815 "core_count": 1 00:07:29.815 } 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62283 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62283 ']' 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62283 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62283 00:07:29.815 killing process with pid 62283 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62283' 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62283 00:07:29.815 18:48:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62283 00:07:29.815 [2024-11-16 18:48:13.162348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.074 [2024-11-16 18:48:13.286872] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:31.014 18:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9BwpMVsQJ0 00:07:31.014 18:48:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:31.014 18:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:31.014 18:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:07:31.014 18:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:31.014 ************************************ 00:07:31.014 END TEST raid_read_error_test 00:07:31.014 ************************************ 00:07:31.014 18:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:31.014 18:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:31.014 18:48:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:07:31.014 00:07:31.014 real 0m4.151s 00:07:31.014 user 0m4.938s 00:07:31.014 sys 0m0.508s 00:07:31.014 18:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.014 18:48:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.014 18:48:14 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:07:31.014 18:48:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:31.014 18:48:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.014 18:48:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:31.014 ************************************ 00:07:31.014 START TEST raid_write_error_test 00:07:31.014 ************************************ 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:31.014 18:48:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:07:31.014 18:48:14 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.xH5cY6fsHg 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62423 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62423 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62423 ']' 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.014 18:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.015 18:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.015 18:48:14 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.274 [2024-11-16 18:48:14.558000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:31.274 [2024-11-16 18:48:14.558122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62423 ] 00:07:31.274 [2024-11-16 18:48:14.729942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.533 [2024-11-16 18:48:14.844861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.792 [2024-11-16 18:48:15.041478] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.792 [2024-11-16 18:48:15.041517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.052 BaseBdev1_malloc 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.052 true 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.052 [2024-11-16 18:48:15.435212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:32.052 [2024-11-16 18:48:15.435265] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.052 [2024-11-16 18:48:15.435284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:32.052 [2024-11-16 18:48:15.435293] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.052 [2024-11-16 18:48:15.437287] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.052 [2024-11-16 18:48:15.437394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:32.052 BaseBdev1 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.052 BaseBdev2_malloc 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:32.052 18:48:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.052 true 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.052 [2024-11-16 18:48:15.499099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:32.052 [2024-11-16 18:48:15.499147] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.052 [2024-11-16 18:48:15.499163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:32.052 [2024-11-16 18:48:15.499172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.052 [2024-11-16 18:48:15.501165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.052 [2024-11-16 18:48:15.501204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:32.052 BaseBdev2 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.052 [2024-11-16 18:48:15.511134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:32.052 [2024-11-16 18:48:15.512885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:32.052 [2024-11-16 18:48:15.513074] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:32.052 [2024-11-16 18:48:15.513090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:32.052 [2024-11-16 18:48:15.513319] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:32.052 [2024-11-16 18:48:15.513472] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:32.052 [2024-11-16 18:48:15.513483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:32.052 [2024-11-16 18:48:15.513605] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:32.052 18:48:15 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:32.052 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:32.316 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:32.316 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.316 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.316 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.316 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.316 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:32.316 "name": "raid_bdev1", 00:07:32.316 "uuid": "7ba866c1-f58a-4bdb-821a-bb27f5bb4834", 00:07:32.316 "strip_size_kb": 64, 00:07:32.316 "state": "online", 00:07:32.316 "raid_level": "concat", 00:07:32.316 "superblock": true, 00:07:32.316 "num_base_bdevs": 2, 00:07:32.316 "num_base_bdevs_discovered": 2, 00:07:32.316 "num_base_bdevs_operational": 2, 00:07:32.316 "base_bdevs_list": [ 00:07:32.316 { 00:07:32.316 "name": "BaseBdev1", 00:07:32.316 "uuid": "1bffb780-8784-5ad5-af4c-c6788b1317c2", 00:07:32.316 "is_configured": true, 00:07:32.316 "data_offset": 2048, 00:07:32.316 "data_size": 63488 00:07:32.316 }, 00:07:32.316 { 00:07:32.316 "name": "BaseBdev2", 00:07:32.316 "uuid": "97665f27-5277-54f6-b7d1-e1f20d88b771", 00:07:32.316 "is_configured": true, 00:07:32.316 "data_offset": 2048, 00:07:32.316 "data_size": 63488 00:07:32.316 } 00:07:32.316 ] 00:07:32.316 }' 00:07:32.316 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:32.316 18:48:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.575 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:32.575 18:48:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:32.834 [2024-11-16 18:48:16.071497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:33.771 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:33.771 18:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.771 18:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.771 18:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.771 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.772 18:48:16 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.772 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.772 18:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:33.772 "name": "raid_bdev1", 00:07:33.772 "uuid": "7ba866c1-f58a-4bdb-821a-bb27f5bb4834", 00:07:33.772 "strip_size_kb": 64, 00:07:33.772 "state": "online", 00:07:33.772 "raid_level": "concat", 00:07:33.772 "superblock": true, 00:07:33.772 "num_base_bdevs": 2, 00:07:33.772 "num_base_bdevs_discovered": 2, 00:07:33.772 "num_base_bdevs_operational": 2, 00:07:33.772 "base_bdevs_list": [ 00:07:33.772 { 00:07:33.772 "name": "BaseBdev1", 00:07:33.772 "uuid": "1bffb780-8784-5ad5-af4c-c6788b1317c2", 00:07:33.772 "is_configured": true, 00:07:33.772 "data_offset": 2048, 00:07:33.772 "data_size": 63488 00:07:33.772 }, 00:07:33.772 { 00:07:33.772 "name": "BaseBdev2", 00:07:33.772 "uuid": "97665f27-5277-54f6-b7d1-e1f20d88b771", 00:07:33.772 "is_configured": true, 00:07:33.772 "data_offset": 2048, 00:07:33.772 "data_size": 63488 00:07:33.772 } 00:07:33.772 ] 00:07:33.772 }' 00:07:33.772 18:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:33.772 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.032 [2024-11-16 18:48:17.409349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:34.032 [2024-11-16 18:48:17.409429] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:34.032 [2024-11-16 18:48:17.412174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:34.032 [2024-11-16 18:48:17.412259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:34.032 [2024-11-16 18:48:17.412312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:34.032 [2024-11-16 18:48:17.412358] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:34.032 { 00:07:34.032 "results": [ 00:07:34.032 { 00:07:34.032 "job": "raid_bdev1", 00:07:34.032 "core_mask": "0x1", 00:07:34.032 "workload": "randrw", 00:07:34.032 "percentage": 50, 00:07:34.032 "status": "finished", 00:07:34.032 "queue_depth": 1, 00:07:34.032 "io_size": 131072, 00:07:34.032 "runtime": 1.338791, 00:07:34.032 "iops": 17475.468538405174, 00:07:34.032 "mibps": 2184.4335673006467, 00:07:34.032 "io_failed": 1, 00:07:34.032 "io_timeout": 0, 00:07:34.032 "avg_latency_us": 79.37546488716781, 00:07:34.032 "min_latency_us": 24.258515283842794, 00:07:34.032 "max_latency_us": 1409.4532751091704 00:07:34.032 } 00:07:34.032 ], 00:07:34.032 "core_count": 1 00:07:34.032 } 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62423 00:07:34.032 18:48:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62423 ']' 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62423 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62423 00:07:34.032 killing process with pid 62423 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62423' 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62423 00:07:34.032 18:48:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62423 00:07:34.032 [2024-11-16 18:48:17.456285] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:34.291 [2024-11-16 18:48:17.581644] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:35.229 18:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.xH5cY6fsHg 00:07:35.229 18:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:35.229 18:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:35.229 18:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:07:35.229 18:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:07:35.229 18:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:35.229 18:48:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:35.229 18:48:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:07:35.229 00:07:35.229 real 0m4.222s 00:07:35.229 user 0m5.069s 00:07:35.229 sys 0m0.521s 00:07:35.229 18:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.229 18:48:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.229 ************************************ 00:07:35.229 END TEST raid_write_error_test 00:07:35.229 ************************************ 00:07:35.489 18:48:18 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:35.489 18:48:18 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:07:35.489 18:48:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:35.489 18:48:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.489 18:48:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:35.489 ************************************ 00:07:35.489 START TEST raid_state_function_test 00:07:35.489 ************************************ 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:35.489 Process raid pid: 62561 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62561 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62561' 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62561 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62561 ']' 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.489 18:48:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.489 [2024-11-16 18:48:18.842343] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
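The trace above shows `waitforlisten 62561` polling (`max_retries=100`) until the `bdev_svc` app starts accepting connections on the UNIX domain socket `/var/tmp/spdk.sock`. A minimal Python sketch of that polling loop, assuming only what the log shows (the function name, retry bound, and socket path come from the trace; the delay value is an illustrative choice, not the helper's actual implementation):

```python
import os
import socket
import time

def waitforlisten(sock_path, max_retries=100, delay=0.1):
    """Poll until a UNIX domain socket at sock_path accepts connections.

    Mirrors the shell helper's behavior as seen in the trace: retry up to
    max_retries times before giving up. Returns True once a connection
    succeeds, False if retries are exhausted.
    """
    for _ in range(max_retries):
        if os.path.exists(sock_path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(sock_path)
                return True
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```

The key detail the sketch preserves is that mere existence of the socket file is not enough; the loop only succeeds once a `connect()` is accepted, which is what "listen on UNIX domain socket" in the log message implies.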
00:07:35.489 [2024-11-16 18:48:18.842547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.765 [2024-11-16 18:48:19.014623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.765 [2024-11-16 18:48:19.122801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.025 [2024-11-16 18:48:19.319903] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.025 [2024-11-16 18:48:19.320021] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.286 [2024-11-16 18:48:19.669974] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.286 [2024-11-16 18:48:19.670023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.286 [2024-11-16 18:48:19.670033] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.286 [2024-11-16 18:48:19.670042] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.286 18:48:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.286 "name": "Existed_Raid", 00:07:36.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.286 "strip_size_kb": 0, 00:07:36.286 "state": "configuring", 00:07:36.286 
"raid_level": "raid1", 00:07:36.286 "superblock": false, 00:07:36.286 "num_base_bdevs": 2, 00:07:36.286 "num_base_bdevs_discovered": 0, 00:07:36.286 "num_base_bdevs_operational": 2, 00:07:36.286 "base_bdevs_list": [ 00:07:36.286 { 00:07:36.286 "name": "BaseBdev1", 00:07:36.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.286 "is_configured": false, 00:07:36.286 "data_offset": 0, 00:07:36.286 "data_size": 0 00:07:36.286 }, 00:07:36.286 { 00:07:36.286 "name": "BaseBdev2", 00:07:36.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.286 "is_configured": false, 00:07:36.286 "data_offset": 0, 00:07:36.286 "data_size": 0 00:07:36.286 } 00:07:36.286 ] 00:07:36.286 }' 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.286 18:48:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.854 [2024-11-16 18:48:20.105203] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.854 [2024-11-16 18:48:20.105290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:36.854 [2024-11-16 18:48:20.117143] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.854 [2024-11-16 18:48:20.117220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.854 [2024-11-16 18:48:20.117247] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.854 [2024-11-16 18:48:20.117272] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.854 [2024-11-16 18:48:20.165815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.854 BaseBdev1 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.854 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.855 [ 00:07:36.855 { 00:07:36.855 "name": "BaseBdev1", 00:07:36.855 "aliases": [ 00:07:36.855 "7c42d308-326c-43f8-9940-6ecd65804f3f" 00:07:36.855 ], 00:07:36.855 "product_name": "Malloc disk", 00:07:36.855 "block_size": 512, 00:07:36.855 "num_blocks": 65536, 00:07:36.855 "uuid": "7c42d308-326c-43f8-9940-6ecd65804f3f", 00:07:36.855 "assigned_rate_limits": { 00:07:36.855 "rw_ios_per_sec": 0, 00:07:36.855 "rw_mbytes_per_sec": 0, 00:07:36.855 "r_mbytes_per_sec": 0, 00:07:36.855 "w_mbytes_per_sec": 0 00:07:36.855 }, 00:07:36.855 "claimed": true, 00:07:36.855 "claim_type": "exclusive_write", 00:07:36.855 "zoned": false, 00:07:36.855 "supported_io_types": { 00:07:36.855 "read": true, 00:07:36.855 "write": true, 00:07:36.855 "unmap": true, 00:07:36.855 "flush": true, 00:07:36.855 "reset": true, 00:07:36.855 "nvme_admin": false, 00:07:36.855 "nvme_io": false, 00:07:36.855 "nvme_io_md": false, 00:07:36.855 "write_zeroes": true, 00:07:36.855 "zcopy": true, 00:07:36.855 "get_zone_info": false, 00:07:36.855 "zone_management": false, 00:07:36.855 "zone_append": false, 00:07:36.855 "compare": false, 00:07:36.855 "compare_and_write": false, 00:07:36.855 "abort": true, 00:07:36.855 "seek_hole": false, 00:07:36.855 "seek_data": false, 00:07:36.855 "copy": true, 00:07:36.855 "nvme_iov_md": 
false 00:07:36.855 }, 00:07:36.855 "memory_domains": [ 00:07:36.855 { 00:07:36.855 "dma_device_id": "system", 00:07:36.855 "dma_device_type": 1 00:07:36.855 }, 00:07:36.855 { 00:07:36.855 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.855 "dma_device_type": 2 00:07:36.855 } 00:07:36.855 ], 00:07:36.855 "driver_specific": {} 00:07:36.855 } 00:07:36.855 ] 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.855 
18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.855 "name": "Existed_Raid", 00:07:36.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.855 "strip_size_kb": 0, 00:07:36.855 "state": "configuring", 00:07:36.855 "raid_level": "raid1", 00:07:36.855 "superblock": false, 00:07:36.855 "num_base_bdevs": 2, 00:07:36.855 "num_base_bdevs_discovered": 1, 00:07:36.855 "num_base_bdevs_operational": 2, 00:07:36.855 "base_bdevs_list": [ 00:07:36.855 { 00:07:36.855 "name": "BaseBdev1", 00:07:36.855 "uuid": "7c42d308-326c-43f8-9940-6ecd65804f3f", 00:07:36.855 "is_configured": true, 00:07:36.855 "data_offset": 0, 00:07:36.855 "data_size": 65536 00:07:36.855 }, 00:07:36.855 { 00:07:36.855 "name": "BaseBdev2", 00:07:36.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.855 "is_configured": false, 00:07:36.855 "data_offset": 0, 00:07:36.855 "data_size": 0 00:07:36.855 } 00:07:36.855 ] 00:07:36.855 }' 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.855 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.423 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.423 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.424 [2024-11-16 18:48:20.644993] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.424 [2024-11-16 18:48:20.645075] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.424 [2024-11-16 18:48:20.653019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.424 [2024-11-16 18:48:20.654827] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.424 [2024-11-16 18:48:20.654898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.424 "name": "Existed_Raid", 00:07:37.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.424 "strip_size_kb": 0, 00:07:37.424 "state": "configuring", 00:07:37.424 "raid_level": "raid1", 00:07:37.424 "superblock": false, 00:07:37.424 "num_base_bdevs": 2, 00:07:37.424 "num_base_bdevs_discovered": 1, 00:07:37.424 "num_base_bdevs_operational": 2, 00:07:37.424 "base_bdevs_list": [ 00:07:37.424 { 00:07:37.424 "name": "BaseBdev1", 00:07:37.424 "uuid": "7c42d308-326c-43f8-9940-6ecd65804f3f", 00:07:37.424 "is_configured": true, 00:07:37.424 "data_offset": 0, 00:07:37.424 "data_size": 65536 00:07:37.424 }, 00:07:37.424 { 00:07:37.424 "name": "BaseBdev2", 00:07:37.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.424 "is_configured": false, 00:07:37.424 "data_offset": 0, 00:07:37.424 "data_size": 0 00:07:37.424 } 00:07:37.424 ] 
00:07:37.424 }' 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.424 18:48:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.684 [2024-11-16 18:48:21.115684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.684 [2024-11-16 18:48:21.115792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.684 [2024-11-16 18:48:21.115804] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:37.684 [2024-11-16 18:48:21.116096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.684 [2024-11-16 18:48:21.116265] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.684 [2024-11-16 18:48:21.116279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:37.684 [2024-11-16 18:48:21.116530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.684 BaseBdev2 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@905 -- # local i 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.684 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.684 [ 00:07:37.684 { 00:07:37.684 "name": "BaseBdev2", 00:07:37.684 "aliases": [ 00:07:37.684 "0223f2c2-cc9f-4613-8298-4d6f4702af8d" 00:07:37.684 ], 00:07:37.684 "product_name": "Malloc disk", 00:07:37.684 "block_size": 512, 00:07:37.684 "num_blocks": 65536, 00:07:37.684 "uuid": "0223f2c2-cc9f-4613-8298-4d6f4702af8d", 00:07:37.684 "assigned_rate_limits": { 00:07:37.684 "rw_ios_per_sec": 0, 00:07:37.684 "rw_mbytes_per_sec": 0, 00:07:37.684 "r_mbytes_per_sec": 0, 00:07:37.684 "w_mbytes_per_sec": 0 00:07:37.684 }, 00:07:37.684 "claimed": true, 00:07:37.684 "claim_type": "exclusive_write", 00:07:37.684 "zoned": false, 00:07:37.684 "supported_io_types": { 00:07:37.684 "read": true, 00:07:37.684 "write": true, 00:07:37.684 "unmap": true, 00:07:37.684 "flush": true, 00:07:37.684 "reset": true, 00:07:37.684 "nvme_admin": false, 00:07:37.684 "nvme_io": false, 00:07:37.684 "nvme_io_md": false, 00:07:37.684 "write_zeroes": 
true, 00:07:37.684 "zcopy": true, 00:07:37.684 "get_zone_info": false, 00:07:37.684 "zone_management": false, 00:07:37.684 "zone_append": false, 00:07:37.684 "compare": false, 00:07:37.684 "compare_and_write": false, 00:07:37.684 "abort": true, 00:07:37.684 "seek_hole": false, 00:07:37.684 "seek_data": false, 00:07:37.684 "copy": true, 00:07:37.684 "nvme_iov_md": false 00:07:37.684 }, 00:07:37.684 "memory_domains": [ 00:07:37.684 { 00:07:37.684 "dma_device_id": "system", 00:07:37.684 "dma_device_type": 1 00:07:37.684 }, 00:07:37.684 { 00:07:37.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.944 "dma_device_type": 2 00:07:37.944 } 00:07:37.944 ], 00:07:37.944 "driver_specific": {} 00:07:37.944 } 00:07:37.944 ] 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.944 18:48:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.944 "name": "Existed_Raid", 00:07:37.944 "uuid": "15e9a4ff-828b-4f48-b56d-e88905bf8817", 00:07:37.944 "strip_size_kb": 0, 00:07:37.944 "state": "online", 00:07:37.944 "raid_level": "raid1", 00:07:37.944 "superblock": false, 00:07:37.944 "num_base_bdevs": 2, 00:07:37.944 "num_base_bdevs_discovered": 2, 00:07:37.944 "num_base_bdevs_operational": 2, 00:07:37.944 "base_bdevs_list": [ 00:07:37.944 { 00:07:37.944 "name": "BaseBdev1", 00:07:37.944 "uuid": "7c42d308-326c-43f8-9940-6ecd65804f3f", 00:07:37.944 "is_configured": true, 00:07:37.944 "data_offset": 0, 00:07:37.944 "data_size": 65536 00:07:37.944 }, 00:07:37.944 { 00:07:37.944 "name": "BaseBdev2", 00:07:37.944 "uuid": "0223f2c2-cc9f-4613-8298-4d6f4702af8d", 00:07:37.944 "is_configured": true, 00:07:37.944 "data_offset": 0, 00:07:37.944 "data_size": 65536 00:07:37.944 } 00:07:37.944 ] 00:07:37.944 }' 00:07:37.944 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.944 18:48:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.203 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:38.203 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:38.203 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:38.203 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:38.203 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:38.203 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:38.203 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:38.204 [2024-11-16 18:48:21.515219] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:38.204 "name": "Existed_Raid", 00:07:38.204 "aliases": [ 00:07:38.204 "15e9a4ff-828b-4f48-b56d-e88905bf8817" 00:07:38.204 ], 00:07:38.204 "product_name": "Raid Volume", 00:07:38.204 "block_size": 512, 00:07:38.204 "num_blocks": 65536, 00:07:38.204 "uuid": "15e9a4ff-828b-4f48-b56d-e88905bf8817", 00:07:38.204 "assigned_rate_limits": { 00:07:38.204 "rw_ios_per_sec": 0, 00:07:38.204 "rw_mbytes_per_sec": 0, 00:07:38.204 "r_mbytes_per_sec": 0, 00:07:38.204 
"w_mbytes_per_sec": 0 00:07:38.204 }, 00:07:38.204 "claimed": false, 00:07:38.204 "zoned": false, 00:07:38.204 "supported_io_types": { 00:07:38.204 "read": true, 00:07:38.204 "write": true, 00:07:38.204 "unmap": false, 00:07:38.204 "flush": false, 00:07:38.204 "reset": true, 00:07:38.204 "nvme_admin": false, 00:07:38.204 "nvme_io": false, 00:07:38.204 "nvme_io_md": false, 00:07:38.204 "write_zeroes": true, 00:07:38.204 "zcopy": false, 00:07:38.204 "get_zone_info": false, 00:07:38.204 "zone_management": false, 00:07:38.204 "zone_append": false, 00:07:38.204 "compare": false, 00:07:38.204 "compare_and_write": false, 00:07:38.204 "abort": false, 00:07:38.204 "seek_hole": false, 00:07:38.204 "seek_data": false, 00:07:38.204 "copy": false, 00:07:38.204 "nvme_iov_md": false 00:07:38.204 }, 00:07:38.204 "memory_domains": [ 00:07:38.204 { 00:07:38.204 "dma_device_id": "system", 00:07:38.204 "dma_device_type": 1 00:07:38.204 }, 00:07:38.204 { 00:07:38.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.204 "dma_device_type": 2 00:07:38.204 }, 00:07:38.204 { 00:07:38.204 "dma_device_id": "system", 00:07:38.204 "dma_device_type": 1 00:07:38.204 }, 00:07:38.204 { 00:07:38.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.204 "dma_device_type": 2 00:07:38.204 } 00:07:38.204 ], 00:07:38.204 "driver_specific": { 00:07:38.204 "raid": { 00:07:38.204 "uuid": "15e9a4ff-828b-4f48-b56d-e88905bf8817", 00:07:38.204 "strip_size_kb": 0, 00:07:38.204 "state": "online", 00:07:38.204 "raid_level": "raid1", 00:07:38.204 "superblock": false, 00:07:38.204 "num_base_bdevs": 2, 00:07:38.204 "num_base_bdevs_discovered": 2, 00:07:38.204 "num_base_bdevs_operational": 2, 00:07:38.204 "base_bdevs_list": [ 00:07:38.204 { 00:07:38.204 "name": "BaseBdev1", 00:07:38.204 "uuid": "7c42d308-326c-43f8-9940-6ecd65804f3f", 00:07:38.204 "is_configured": true, 00:07:38.204 "data_offset": 0, 00:07:38.204 "data_size": 65536 00:07:38.204 }, 00:07:38.204 { 00:07:38.204 "name": "BaseBdev2", 00:07:38.204 "uuid": 
"0223f2c2-cc9f-4613-8298-4d6f4702af8d", 00:07:38.204 "is_configured": true, 00:07:38.204 "data_offset": 0, 00:07:38.204 "data_size": 65536 00:07:38.204 } 00:07:38.204 ] 00:07:38.204 } 00:07:38.204 } 00:07:38.204 }' 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.204 BaseBdev2' 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.204 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:38.463 18:48:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.463 [2024-11-16 18:48:21.730690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:38.463 "name": "Existed_Raid",
00:07:38.463 "uuid": "15e9a4ff-828b-4f48-b56d-e88905bf8817",
00:07:38.463 "strip_size_kb": 0,
00:07:38.463 "state": "online",
00:07:38.463 "raid_level": "raid1",
00:07:38.463 "superblock": false,
00:07:38.463 "num_base_bdevs": 2,
00:07:38.463 "num_base_bdevs_discovered": 1,
00:07:38.463 "num_base_bdevs_operational": 1,
00:07:38.463 "base_bdevs_list": [
00:07:38.463 { 
00:07:38.463 "name": null, 00:07:38.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.463 "is_configured": false, 00:07:38.463 "data_offset": 0, 00:07:38.463 "data_size": 65536 00:07:38.463 }, 00:07:38.463 { 00:07:38.463 "name": "BaseBdev2", 00:07:38.463 "uuid": "0223f2c2-cc9f-4613-8298-4d6f4702af8d", 00:07:38.463 "is_configured": true, 00:07:38.463 "data_offset": 0, 00:07:38.463 "data_size": 65536 00:07:38.463 } 00:07:38.463 ] 00:07:38.463 }' 00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.463 18:48:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:39.031 [2024-11-16 18:48:22.265378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:07:39.031 [2024-11-16 18:48:22.265472] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:39.031 [2024-11-16 18:48:22.360393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:39.031 [2024-11-16 18:48:22.360530] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:39.031 [2024-11-16 18:48:22.360576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']'
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62561
00:07:39.031 18:48:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62561 ']'
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62561
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62561
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62561'
00:07:39.031 killing process with pid 62561
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62561
00:07:39.031 [2024-11-16 18:48:22.453758] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:39.031 18:48:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62561
00:07:39.031 [2024-11-16 18:48:22.469663] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:07:40.409
00:07:40.409 real 0m4.752s
00:07:40.409 user 0m6.814s
00:07:40.409 sys 0m0.781s
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:07:40.409 ************************************
00:07:40.409 END TEST raid_state_function_test
00:07:40.409 ************************************
00:07:40.409 18:48:23 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true
00:07:40.409 18:48:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:40.409 18:48:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.409 18:48:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:40.409 ************************************
00:07:40.409 START TEST raid_state_function_test_sb
00:07:40.409 ************************************
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62809
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62809'
00:07:40.409 Process raid pid: 62809
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62809
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62809 ']'
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:40.409 18:48:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:40.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:40.409 18:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:40.409 [2024-11-16 18:48:23.688989] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:07:40.410 [2024-11-16 18:48:23.689253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:40.410 [2024-11-16 18:48:23.873782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:40.668 [2024-11-16 18:48:23.982741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:40.926 [2024-11-16 18:48:24.183702] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:40.926 [2024-11-16 18:48:24.183783] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:41.184 [2024-11-16 18:48:24.531118] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:41.184 [2024-11-16 18:48:24.531166] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:41.184 [2024-11-16 18:48:24.531177] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:41.184 [2024-11-16 18:48:24.531186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")'
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:41.184 "name": "Existed_Raid",
00:07:41.184 "uuid": "1545a7c7-97e4-426b-ae53-708be3995a5a",
00:07:41.184 "strip_size_kb": 0,
00:07:41.184 "state": "configuring",
00:07:41.184 "raid_level": "raid1",
00:07:41.184 "superblock": true,
00:07:41.184 "num_base_bdevs": 2,
00:07:41.184 "num_base_bdevs_discovered": 0,
00:07:41.184 "num_base_bdevs_operational": 2,
00:07:41.184 "base_bdevs_list": [
00:07:41.184 {
00:07:41.184 "name": "BaseBdev1",
00:07:41.184 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:41.184 "is_configured": false,
00:07:41.184 "data_offset": 0,
00:07:41.184 "data_size": 0
00:07:41.184 },
00:07:41.184 {
00:07:41.184 "name": "BaseBdev2",
00:07:41.184 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:41.184 "is_configured": false,
00:07:41.184 "data_offset": 0,
00:07:41.184 "data_size": 0
00:07:41.184 }
00:07:41.184 ]
00:07:41.184 }'
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:41.184 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:41.760 [2024-11-16 18:48:24.954378] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid
00:07:41.760 [2024-11-16 18:48:24.954460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:41.760 [2024-11-16 18:48:24.962347] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:07:41.760 [2024-11-16 18:48:24.962422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:07:41.760 [2024-11-16 18:48:24.962449] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:07:41.760 [2024-11-16 18:48:24.962473] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.760 18:48:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:41.760 [2024-11-16 18:48:25.002394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:41.760 BaseBdev1
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:41.760 [
00:07:41.760 {
00:07:41.760 "name": "BaseBdev1",
00:07:41.760 "aliases": [
00:07:41.760 "ae55a0ea-f32c-45f5-b472-874237a8b5ed"
00:07:41.760 ],
00:07:41.760 "product_name": "Malloc disk",
00:07:41.760 "block_size": 512,
00:07:41.760 "num_blocks": 65536,
00:07:41.760 "uuid": "ae55a0ea-f32c-45f5-b472-874237a8b5ed",
00:07:41.760 "assigned_rate_limits": {
00:07:41.760 "rw_ios_per_sec": 0,
00:07:41.760 "rw_mbytes_per_sec": 0,
00:07:41.760 "r_mbytes_per_sec": 0,
00:07:41.760 "w_mbytes_per_sec": 0
00:07:41.760 },
00:07:41.760 "claimed": true, 
00:07:41.760 "claim_type": "exclusive_write", 00:07:41.760 "zoned": false, 00:07:41.760 "supported_io_types": { 00:07:41.760 "read": true, 00:07:41.760 "write": true, 00:07:41.760 "unmap": true, 00:07:41.760 "flush": true, 00:07:41.760 "reset": true, 00:07:41.760 "nvme_admin": false, 00:07:41.760 "nvme_io": false, 00:07:41.760 "nvme_io_md": false, 00:07:41.760 "write_zeroes": true, 00:07:41.760 "zcopy": true, 00:07:41.760 "get_zone_info": false, 00:07:41.760 "zone_management": false, 00:07:41.760 "zone_append": false, 00:07:41.760 "compare": false, 00:07:41.760 "compare_and_write": false, 00:07:41.760 "abort": true, 00:07:41.760 "seek_hole": false, 00:07:41.760 "seek_data": false, 00:07:41.760 "copy": true, 00:07:41.760 "nvme_iov_md": false 00:07:41.760 }, 00:07:41.760 "memory_domains": [ 00:07:41.760 { 00:07:41.760 "dma_device_id": "system", 00:07:41.760 "dma_device_type": 1 00:07:41.760 }, 00:07:41.760 { 00:07:41.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.760 "dma_device_type": 2 00:07:41.760 } 00:07:41.760 ], 00:07:41.760 "driver_specific": {} 00:07:41.760 } 00:07:41.760 ] 00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.760 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:41.760 "name": "Existed_Raid",
00:07:41.761 "uuid": "139fc2f4-c16e-44fb-8231-23839a8582a8",
00:07:41.761 "strip_size_kb": 0,
00:07:41.761 "state": "configuring",
00:07:41.761 "raid_level": "raid1",
00:07:41.761 "superblock": true,
00:07:41.761 "num_base_bdevs": 2,
00:07:41.761 "num_base_bdevs_discovered": 1,
00:07:41.761 "num_base_bdevs_operational": 2,
00:07:41.761 "base_bdevs_list": [
00:07:41.761 {
00:07:41.761 "name": "BaseBdev1",
00:07:41.761 "uuid": "ae55a0ea-f32c-45f5-b472-874237a8b5ed",
00:07:41.761 "is_configured": true,
00:07:41.761 "data_offset": 2048,
00:07:41.761 "data_size": 63488
00:07:41.761 },
00:07:41.761 {
00:07:41.761 "name": "BaseBdev2",
00:07:41.761 "uuid": "00000000-0000-0000-0000-000000000000",
00:07:41.761 "is_configured": false, 
"data_offset": 0, 00:07:41.761 "data_size": 0 00:07:41.761 } 00:07:41.761 ] 00:07:41.761 }' 00:07:41.761 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.761 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.020 [2024-11-16 18:48:25.445675] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:42.020 [2024-11-16 18:48:25.445721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.020 [2024-11-16 18:48:25.457705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.020 [2024-11-16 18:48:25.459406] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:42.020 [2024-11-16 18:48:25.459449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:07:42.020 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:42.021 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:42.021 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:42.021 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:42.021 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:42.021 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:42.021 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:42.021 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:07:42.021 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:07:42.021 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:42.280 18:48:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.280 "name": "Existed_Raid", 00:07:42.280 "uuid": "d1313c93-3dc5-4b71-a3a6-83be18bd204a", 00:07:42.280 "strip_size_kb": 0, 00:07:42.280 "state": "configuring", 00:07:42.280 "raid_level": "raid1", 00:07:42.280 "superblock": true, 00:07:42.280 "num_base_bdevs": 2, 00:07:42.280 "num_base_bdevs_discovered": 1, 00:07:42.280 "num_base_bdevs_operational": 2, 00:07:42.280 "base_bdevs_list": [ 00:07:42.280 { 00:07:42.280 "name": "BaseBdev1", 00:07:42.280 "uuid": "ae55a0ea-f32c-45f5-b472-874237a8b5ed", 00:07:42.280 "is_configured": true, 00:07:42.280 "data_offset": 2048, 00:07:42.280 "data_size": 63488 00:07:42.280 }, 00:07:42.280 { 00:07:42.280 "name": "BaseBdev2", 00:07:42.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.280 "is_configured": false, 00:07:42.280 "data_offset": 0, 00:07:42.280 "data_size": 0 00:07:42.280 } 00:07:42.280 ] 00:07:42.280 }' 00:07:42.280 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.280 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.540 [2024-11-16 18:48:25.864189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.540 [2024-11-16 18:48:25.864529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.540 [2024-11-16 18:48:25.864580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:42.540 [2024-11-16 18:48:25.864869] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:42.540 
[2024-11-16 18:48:25.865062] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.540 [2024-11-16 18:48:25.865107] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:42.540 BaseBdev2 00:07:42.540 [2024-11-16 18:48:25.865296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.540 [ 00:07:42.540 { 00:07:42.540 "name": "BaseBdev2", 00:07:42.540 "aliases": [ 00:07:42.540 "ecc5a306-8699-4c4e-bf8b-1a4c11f467cb" 00:07:42.540 ], 00:07:42.540 "product_name": "Malloc disk", 00:07:42.540 "block_size": 512, 00:07:42.540 "num_blocks": 65536, 00:07:42.540 "uuid": "ecc5a306-8699-4c4e-bf8b-1a4c11f467cb", 00:07:42.540 "assigned_rate_limits": { 00:07:42.540 "rw_ios_per_sec": 0, 00:07:42.540 "rw_mbytes_per_sec": 0, 00:07:42.540 "r_mbytes_per_sec": 0, 00:07:42.540 "w_mbytes_per_sec": 0 00:07:42.540 }, 00:07:42.540 "claimed": true, 00:07:42.540 "claim_type": "exclusive_write", 00:07:42.540 "zoned": false, 00:07:42.540 "supported_io_types": { 00:07:42.540 "read": true, 00:07:42.540 "write": true, 00:07:42.540 "unmap": true, 00:07:42.540 "flush": true, 00:07:42.540 "reset": true, 00:07:42.540 "nvme_admin": false, 00:07:42.540 "nvme_io": false, 00:07:42.540 "nvme_io_md": false, 00:07:42.540 "write_zeroes": true, 00:07:42.540 "zcopy": true, 00:07:42.540 "get_zone_info": false, 00:07:42.540 "zone_management": false, 00:07:42.540 "zone_append": false, 00:07:42.540 "compare": false, 00:07:42.540 "compare_and_write": false, 00:07:42.540 "abort": true, 00:07:42.540 "seek_hole": false, 00:07:42.540 "seek_data": false, 00:07:42.540 "copy": true, 00:07:42.540 "nvme_iov_md": false 00:07:42.540 }, 00:07:42.540 "memory_domains": [ 00:07:42.540 { 00:07:42.540 "dma_device_id": "system", 00:07:42.540 "dma_device_type": 1 00:07:42.540 }, 00:07:42.540 { 00:07:42.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.540 "dma_device_type": 2 00:07:42.540 } 00:07:42.540 ], 00:07:42.540 "driver_specific": {} 00:07:42.540 } 00:07:42.540 ] 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.540 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:07:42.540 "name": "Existed_Raid", 00:07:42.540 "uuid": "d1313c93-3dc5-4b71-a3a6-83be18bd204a", 00:07:42.540 "strip_size_kb": 0, 00:07:42.540 "state": "online", 00:07:42.540 "raid_level": "raid1", 00:07:42.540 "superblock": true, 00:07:42.540 "num_base_bdevs": 2, 00:07:42.540 "num_base_bdevs_discovered": 2, 00:07:42.540 "num_base_bdevs_operational": 2, 00:07:42.540 "base_bdevs_list": [ 00:07:42.540 { 00:07:42.540 "name": "BaseBdev1", 00:07:42.540 "uuid": "ae55a0ea-f32c-45f5-b472-874237a8b5ed", 00:07:42.540 "is_configured": true, 00:07:42.540 "data_offset": 2048, 00:07:42.540 "data_size": 63488 00:07:42.540 }, 00:07:42.540 { 00:07:42.540 "name": "BaseBdev2", 00:07:42.540 "uuid": "ecc5a306-8699-4c4e-bf8b-1a4c11f467cb", 00:07:42.541 "is_configured": true, 00:07:42.541 "data_offset": 2048, 00:07:42.541 "data_size": 63488 00:07:42.541 } 00:07:42.541 ] 00:07:42.541 }' 00:07:42.541 18:48:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.541 18:48:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:42.800 18:48:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.800 [2024-11-16 18:48:26.247847] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:42.800 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.059 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.059 "name": "Existed_Raid", 00:07:43.059 "aliases": [ 00:07:43.059 "d1313c93-3dc5-4b71-a3a6-83be18bd204a" 00:07:43.059 ], 00:07:43.059 "product_name": "Raid Volume", 00:07:43.059 "block_size": 512, 00:07:43.059 "num_blocks": 63488, 00:07:43.059 "uuid": "d1313c93-3dc5-4b71-a3a6-83be18bd204a", 00:07:43.059 "assigned_rate_limits": { 00:07:43.059 "rw_ios_per_sec": 0, 00:07:43.059 "rw_mbytes_per_sec": 0, 00:07:43.059 "r_mbytes_per_sec": 0, 00:07:43.059 "w_mbytes_per_sec": 0 00:07:43.059 }, 00:07:43.059 "claimed": false, 00:07:43.059 "zoned": false, 00:07:43.059 "supported_io_types": { 00:07:43.059 "read": true, 00:07:43.059 "write": true, 00:07:43.059 "unmap": false, 00:07:43.059 "flush": false, 00:07:43.060 "reset": true, 00:07:43.060 "nvme_admin": false, 00:07:43.060 "nvme_io": false, 00:07:43.060 "nvme_io_md": false, 00:07:43.060 "write_zeroes": true, 00:07:43.060 "zcopy": false, 00:07:43.060 "get_zone_info": false, 00:07:43.060 "zone_management": false, 00:07:43.060 "zone_append": false, 00:07:43.060 "compare": false, 00:07:43.060 "compare_and_write": false, 00:07:43.060 "abort": false, 00:07:43.060 "seek_hole": false, 00:07:43.060 "seek_data": false, 00:07:43.060 "copy": false, 00:07:43.060 "nvme_iov_md": false 00:07:43.060 }, 00:07:43.060 "memory_domains": [ 00:07:43.060 { 00:07:43.060 "dma_device_id": "system", 00:07:43.060 
"dma_device_type": 1 00:07:43.060 }, 00:07:43.060 { 00:07:43.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.060 "dma_device_type": 2 00:07:43.060 }, 00:07:43.060 { 00:07:43.060 "dma_device_id": "system", 00:07:43.060 "dma_device_type": 1 00:07:43.060 }, 00:07:43.060 { 00:07:43.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.060 "dma_device_type": 2 00:07:43.060 } 00:07:43.060 ], 00:07:43.060 "driver_specific": { 00:07:43.060 "raid": { 00:07:43.060 "uuid": "d1313c93-3dc5-4b71-a3a6-83be18bd204a", 00:07:43.060 "strip_size_kb": 0, 00:07:43.060 "state": "online", 00:07:43.060 "raid_level": "raid1", 00:07:43.060 "superblock": true, 00:07:43.060 "num_base_bdevs": 2, 00:07:43.060 "num_base_bdevs_discovered": 2, 00:07:43.060 "num_base_bdevs_operational": 2, 00:07:43.060 "base_bdevs_list": [ 00:07:43.060 { 00:07:43.060 "name": "BaseBdev1", 00:07:43.060 "uuid": "ae55a0ea-f32c-45f5-b472-874237a8b5ed", 00:07:43.060 "is_configured": true, 00:07:43.060 "data_offset": 2048, 00:07:43.060 "data_size": 63488 00:07:43.060 }, 00:07:43.060 { 00:07:43.060 "name": "BaseBdev2", 00:07:43.060 "uuid": "ecc5a306-8699-4c4e-bf8b-1a4c11f467cb", 00:07:43.060 "is_configured": true, 00:07:43.060 "data_offset": 2048, 00:07:43.060 "data_size": 63488 00:07:43.060 } 00:07:43.060 ] 00:07:43.060 } 00:07:43.060 } 00:07:43.060 }' 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:43.060 BaseBdev2' 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:43.060 18:48:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.060 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.060 [2024-11-16 18:48:26.483204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.320 "name": "Existed_Raid", 00:07:43.320 "uuid": "d1313c93-3dc5-4b71-a3a6-83be18bd204a", 00:07:43.320 "strip_size_kb": 0, 00:07:43.320 "state": "online", 00:07:43.320 "raid_level": "raid1", 00:07:43.320 "superblock": true, 00:07:43.320 "num_base_bdevs": 2, 00:07:43.320 "num_base_bdevs_discovered": 1, 00:07:43.320 "num_base_bdevs_operational": 1, 00:07:43.320 "base_bdevs_list": [ 00:07:43.320 { 00:07:43.320 "name": null, 00:07:43.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.320 "is_configured": false, 00:07:43.320 "data_offset": 0, 00:07:43.320 "data_size": 63488 00:07:43.320 }, 00:07:43.320 { 00:07:43.320 "name": "BaseBdev2", 00:07:43.320 "uuid": "ecc5a306-8699-4c4e-bf8b-1a4c11f467cb", 00:07:43.320 "is_configured": true, 00:07:43.320 "data_offset": 2048, 00:07:43.320 "data_size": 63488 00:07:43.320 } 00:07:43.320 ] 00:07:43.320 }' 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.320 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.580 18:48:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.580 [2024-11-16 18:48:26.950404] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.580 [2024-11-16 18:48:26.950550] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.580 [2024-11-16 18:48:27.040379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.580 [2024-11-16 18:48:27.040484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:43.580 [2024-11-16 18:48:27.040527] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:43.580 18:48:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.580 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.580 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.580 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.580 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.580 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.580 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62809 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62809 ']' 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62809 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62809 00:07:43.839 killing process with pid 62809 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62809' 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62809 00:07:43.839 [2024-11-16 18:48:27.135543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.839 18:48:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62809 00:07:43.839 [2024-11-16 18:48:27.151335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.776 ************************************ 00:07:44.776 END TEST raid_state_function_test_sb 00:07:44.776 ************************************ 00:07:44.776 18:48:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:44.776 00:07:44.776 real 0m4.619s 00:07:44.776 user 0m6.584s 00:07:44.776 sys 0m0.777s 00:07:44.776 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.776 18:48:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.035 18:48:28 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:07:45.035 18:48:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:45.035 18:48:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.035 18:48:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.035 ************************************ 00:07:45.035 START TEST raid_superblock_test 00:07:45.035 ************************************ 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63056 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63056 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63056 ']' 00:07:45.035 18:48:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.036 18:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.036 18:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.036 18:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.036 18:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.036 [2024-11-16 18:48:28.347986] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:45.036 [2024-11-16 18:48:28.348197] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63056 ] 00:07:45.294 [2024-11-16 18:48:28.522445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.294 [2024-11-16 18:48:28.630138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.553 [2024-11-16 18:48:28.804253] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.553 [2024-11-16 18:48:28.804391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.813 18:48:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.813 malloc1 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.813 [2024-11-16 18:48:29.208148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:45.813 [2024-11-16 18:48:29.208280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.813 [2024-11-16 18:48:29.208330] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:45.813 [2024-11-16 18:48:29.208369] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.813 
[2024-11-16 18:48:29.210497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.813 [2024-11-16 18:48:29.210565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:45.813 pt1 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.813 malloc2 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.813 18:48:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.813 [2024-11-16 18:48:29.266226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:45.813 [2024-11-16 18:48:29.266277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:45.813 [2024-11-16 18:48:29.266297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:45.813 [2024-11-16 18:48:29.266305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:45.813 [2024-11-16 18:48:29.268307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:45.813 [2024-11-16 18:48:29.268341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:45.813 pt2 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.813 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.813 [2024-11-16 18:48:29.278257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:45.813 [2024-11-16 18:48:29.279981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:45.813 [2024-11-16 18:48:29.280130] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:45.813 [2024-11-16 18:48:29.280147] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:45.813 [2024-11-16 
18:48:29.280363] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:45.813 [2024-11-16 18:48:29.280519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:45.813 [2024-11-16 18:48:29.280533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:45.813 [2024-11-16 18:48:29.280684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.074 "name": "raid_bdev1", 00:07:46.074 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:46.074 "strip_size_kb": 0, 00:07:46.074 "state": "online", 00:07:46.074 "raid_level": "raid1", 00:07:46.074 "superblock": true, 00:07:46.074 "num_base_bdevs": 2, 00:07:46.074 "num_base_bdevs_discovered": 2, 00:07:46.074 "num_base_bdevs_operational": 2, 00:07:46.074 "base_bdevs_list": [ 00:07:46.074 { 00:07:46.074 "name": "pt1", 00:07:46.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.074 "is_configured": true, 00:07:46.074 "data_offset": 2048, 00:07:46.074 "data_size": 63488 00:07:46.074 }, 00:07:46.074 { 00:07:46.074 "name": "pt2", 00:07:46.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.074 "is_configured": true, 00:07:46.074 "data_offset": 2048, 00:07:46.074 "data_size": 63488 00:07:46.074 } 00:07:46.074 ] 00:07:46.074 }' 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.074 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:46.334 18:48:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.334 [2024-11-16 18:48:29.709747] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.334 "name": "raid_bdev1", 00:07:46.334 "aliases": [ 00:07:46.334 "ab00a9a0-a7ac-4f27-bafe-bc504936d768" 00:07:46.334 ], 00:07:46.334 "product_name": "Raid Volume", 00:07:46.334 "block_size": 512, 00:07:46.334 "num_blocks": 63488, 00:07:46.334 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:46.334 "assigned_rate_limits": { 00:07:46.334 "rw_ios_per_sec": 0, 00:07:46.334 "rw_mbytes_per_sec": 0, 00:07:46.334 "r_mbytes_per_sec": 0, 00:07:46.334 "w_mbytes_per_sec": 0 00:07:46.334 }, 00:07:46.334 "claimed": false, 00:07:46.334 "zoned": false, 00:07:46.334 "supported_io_types": { 00:07:46.334 "read": true, 00:07:46.334 "write": true, 00:07:46.334 "unmap": false, 00:07:46.334 "flush": false, 00:07:46.334 "reset": true, 00:07:46.334 "nvme_admin": false, 00:07:46.334 "nvme_io": false, 00:07:46.334 "nvme_io_md": false, 00:07:46.334 "write_zeroes": true, 00:07:46.334 "zcopy": false, 00:07:46.334 "get_zone_info": false, 00:07:46.334 "zone_management": false, 00:07:46.334 "zone_append": false, 00:07:46.334 "compare": false, 00:07:46.334 "compare_and_write": false, 00:07:46.334 "abort": false, 00:07:46.334 "seek_hole": false, 00:07:46.334 
"seek_data": false, 00:07:46.334 "copy": false, 00:07:46.334 "nvme_iov_md": false 00:07:46.334 }, 00:07:46.334 "memory_domains": [ 00:07:46.334 { 00:07:46.334 "dma_device_id": "system", 00:07:46.334 "dma_device_type": 1 00:07:46.334 }, 00:07:46.334 { 00:07:46.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.334 "dma_device_type": 2 00:07:46.334 }, 00:07:46.334 { 00:07:46.334 "dma_device_id": "system", 00:07:46.334 "dma_device_type": 1 00:07:46.334 }, 00:07:46.334 { 00:07:46.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.334 "dma_device_type": 2 00:07:46.334 } 00:07:46.334 ], 00:07:46.334 "driver_specific": { 00:07:46.334 "raid": { 00:07:46.334 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:46.334 "strip_size_kb": 0, 00:07:46.334 "state": "online", 00:07:46.334 "raid_level": "raid1", 00:07:46.334 "superblock": true, 00:07:46.334 "num_base_bdevs": 2, 00:07:46.334 "num_base_bdevs_discovered": 2, 00:07:46.334 "num_base_bdevs_operational": 2, 00:07:46.334 "base_bdevs_list": [ 00:07:46.334 { 00:07:46.334 "name": "pt1", 00:07:46.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.334 "is_configured": true, 00:07:46.334 "data_offset": 2048, 00:07:46.334 "data_size": 63488 00:07:46.334 }, 00:07:46.334 { 00:07:46.334 "name": "pt2", 00:07:46.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.334 "is_configured": true, 00:07:46.334 "data_offset": 2048, 00:07:46.334 "data_size": 63488 00:07:46.334 } 00:07:46.334 ] 00:07:46.334 } 00:07:46.334 } 00:07:46.334 }' 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.334 pt2' 00:07:46.334 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.594 18:48:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.594 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.595 [2024-11-16 18:48:29.957329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.595 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.595 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=ab00a9a0-a7ac-4f27-bafe-bc504936d768 00:07:46.595 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z ab00a9a0-a7ac-4f27-bafe-bc504936d768 ']' 00:07:46.595 18:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.595 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.595 18:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.595 [2024-11-16 18:48:30.000969] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.595 [2024-11-16 18:48:30.001037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.595 [2024-11-16 18:48:30.001160] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.595 [2024-11-16 18:48:30.001219] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.595 [2024-11-16 18:48:30.001231] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.595 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.854 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.854 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.854 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:46.854 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.854 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.854 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.855 [2024-11-16 18:48:30.136741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:46.855 [2024-11-16 18:48:30.138549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:46.855 [2024-11-16 18:48:30.138612] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:46.855 [2024-11-16 18:48:30.138676] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:46.855 [2024-11-16 18:48:30.138692] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.855 [2024-11-16 18:48:30.138702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:46.855 request: 00:07:46.855 { 00:07:46.855 "name": "raid_bdev1", 00:07:46.855 "raid_level": "raid1", 00:07:46.855 "base_bdevs": [ 00:07:46.855 "malloc1", 00:07:46.855 "malloc2" 00:07:46.855 ], 00:07:46.855 "superblock": false, 00:07:46.855 "method": "bdev_raid_create", 00:07:46.855 "req_id": 1 00:07:46.855 } 00:07:46.855 Got JSON-RPC error response 00:07:46.855 response: 00:07:46.855 { 00:07:46.855 "code": -17, 00:07:46.855 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:46.855 } 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.855 [2024-11-16 18:48:30.200609] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:46.855 [2024-11-16 18:48:30.200729] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.855 [2024-11-16 18:48:30.200767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:46.855 [2024-11-16 18:48:30.200802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.855 [2024-11-16 18:48:30.202963] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.855 [2024-11-16 18:48:30.203049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:46.855 [2024-11-16 18:48:30.203145] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:46.855 [2024-11-16 18:48:30.203238] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:46.855 pt1 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.855 18:48:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.855 "name": "raid_bdev1", 00:07:46.855 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:46.855 "strip_size_kb": 0, 00:07:46.855 "state": "configuring", 00:07:46.855 "raid_level": "raid1", 00:07:46.855 "superblock": true, 00:07:46.855 "num_base_bdevs": 2, 00:07:46.855 "num_base_bdevs_discovered": 1, 00:07:46.855 "num_base_bdevs_operational": 2, 00:07:46.855 "base_bdevs_list": [ 00:07:46.855 { 00:07:46.855 "name": "pt1", 00:07:46.855 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.855 
"is_configured": true, 00:07:46.855 "data_offset": 2048, 00:07:46.855 "data_size": 63488 00:07:46.855 }, 00:07:46.855 { 00:07:46.855 "name": null, 00:07:46.855 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.855 "is_configured": false, 00:07:46.855 "data_offset": 2048, 00:07:46.855 "data_size": 63488 00:07:46.855 } 00:07:46.855 ] 00:07:46.855 }' 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.855 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.424 [2024-11-16 18:48:30.639891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.424 [2024-11-16 18:48:30.639998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.424 [2024-11-16 18:48:30.640022] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:47.424 [2024-11-16 18:48:30.640033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.424 [2024-11-16 18:48:30.640458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.424 [2024-11-16 18:48:30.640477] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:47.424 [2024-11-16 18:48:30.640550] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.424 [2024-11-16 18:48:30.640574] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.424 [2024-11-16 18:48:30.640705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.424 [2024-11-16 18:48:30.640717] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:47.424 [2024-11-16 18:48:30.640940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:47.424 [2024-11-16 18:48:30.641080] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.424 [2024-11-16 18:48:30.641095] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:47.424 [2024-11-16 18:48:30.641249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.424 pt2 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.424 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.425 
18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.425 "name": "raid_bdev1", 00:07:47.425 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:47.425 "strip_size_kb": 0, 00:07:47.425 "state": "online", 00:07:47.425 "raid_level": "raid1", 00:07:47.425 "superblock": true, 00:07:47.425 "num_base_bdevs": 2, 00:07:47.425 "num_base_bdevs_discovered": 2, 00:07:47.425 "num_base_bdevs_operational": 2, 00:07:47.425 "base_bdevs_list": [ 00:07:47.425 { 00:07:47.425 "name": "pt1", 00:07:47.425 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.425 "is_configured": true, 00:07:47.425 "data_offset": 2048, 00:07:47.425 "data_size": 63488 00:07:47.425 }, 00:07:47.425 { 00:07:47.425 "name": "pt2", 00:07:47.425 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.425 "is_configured": true, 00:07:47.425 "data_offset": 2048, 00:07:47.425 "data_size": 63488 00:07:47.425 } 00:07:47.425 ] 00:07:47.425 }' 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:07:47.425 18:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.684 [2024-11-16 18:48:31.043411] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.684 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.684 "name": "raid_bdev1", 00:07:47.684 "aliases": [ 00:07:47.684 "ab00a9a0-a7ac-4f27-bafe-bc504936d768" 00:07:47.684 ], 00:07:47.684 "product_name": "Raid Volume", 00:07:47.684 "block_size": 512, 00:07:47.684 "num_blocks": 63488, 00:07:47.684 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:47.684 "assigned_rate_limits": { 00:07:47.684 "rw_ios_per_sec": 0, 00:07:47.684 "rw_mbytes_per_sec": 0, 00:07:47.684 "r_mbytes_per_sec": 0, 00:07:47.684 "w_mbytes_per_sec": 0 
00:07:47.684 }, 00:07:47.684 "claimed": false, 00:07:47.684 "zoned": false, 00:07:47.684 "supported_io_types": { 00:07:47.684 "read": true, 00:07:47.684 "write": true, 00:07:47.684 "unmap": false, 00:07:47.684 "flush": false, 00:07:47.684 "reset": true, 00:07:47.684 "nvme_admin": false, 00:07:47.684 "nvme_io": false, 00:07:47.685 "nvme_io_md": false, 00:07:47.685 "write_zeroes": true, 00:07:47.685 "zcopy": false, 00:07:47.685 "get_zone_info": false, 00:07:47.685 "zone_management": false, 00:07:47.685 "zone_append": false, 00:07:47.685 "compare": false, 00:07:47.685 "compare_and_write": false, 00:07:47.685 "abort": false, 00:07:47.685 "seek_hole": false, 00:07:47.685 "seek_data": false, 00:07:47.685 "copy": false, 00:07:47.685 "nvme_iov_md": false 00:07:47.685 }, 00:07:47.685 "memory_domains": [ 00:07:47.685 { 00:07:47.685 "dma_device_id": "system", 00:07:47.685 "dma_device_type": 1 00:07:47.685 }, 00:07:47.685 { 00:07:47.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.685 "dma_device_type": 2 00:07:47.685 }, 00:07:47.685 { 00:07:47.685 "dma_device_id": "system", 00:07:47.685 "dma_device_type": 1 00:07:47.685 }, 00:07:47.685 { 00:07:47.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.685 "dma_device_type": 2 00:07:47.685 } 00:07:47.685 ], 00:07:47.685 "driver_specific": { 00:07:47.685 "raid": { 00:07:47.685 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:47.685 "strip_size_kb": 0, 00:07:47.685 "state": "online", 00:07:47.685 "raid_level": "raid1", 00:07:47.685 "superblock": true, 00:07:47.685 "num_base_bdevs": 2, 00:07:47.685 "num_base_bdevs_discovered": 2, 00:07:47.685 "num_base_bdevs_operational": 2, 00:07:47.685 "base_bdevs_list": [ 00:07:47.685 { 00:07:47.685 "name": "pt1", 00:07:47.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.685 "is_configured": true, 00:07:47.685 "data_offset": 2048, 00:07:47.685 "data_size": 63488 00:07:47.685 }, 00:07:47.685 { 00:07:47.685 "name": "pt2", 00:07:47.685 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:07:47.685 "is_configured": true, 00:07:47.685 "data_offset": 2048, 00:07:47.685 "data_size": 63488 00:07:47.685 } 00:07:47.685 ] 00:07:47.685 } 00:07:47.685 } 00:07:47.685 }' 00:07:47.685 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.685 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:47.685 pt2' 00:07:47.685 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.685 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.685 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.685 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.685 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:47.685 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.685 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:47.945 [2024-11-16 18:48:31.227080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' ab00a9a0-a7ac-4f27-bafe-bc504936d768 '!=' ab00a9a0-a7ac-4f27-bafe-bc504936d768 ']' 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.945 [2024-11-16 18:48:31.274802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:47.945 "name": "raid_bdev1", 00:07:47.945 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:47.945 "strip_size_kb": 0, 00:07:47.945 "state": "online", 00:07:47.945 "raid_level": "raid1", 00:07:47.945 "superblock": true, 00:07:47.945 "num_base_bdevs": 2, 00:07:47.945 "num_base_bdevs_discovered": 1, 00:07:47.945 "num_base_bdevs_operational": 1, 00:07:47.945 "base_bdevs_list": [ 00:07:47.945 { 00:07:47.945 "name": null, 00:07:47.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.945 "is_configured": false, 00:07:47.945 "data_offset": 0, 00:07:47.945 "data_size": 63488 00:07:47.945 }, 00:07:47.945 { 00:07:47.945 "name": "pt2", 00:07:47.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.945 "is_configured": true, 00:07:47.945 "data_offset": 2048, 00:07:47.945 "data_size": 63488 00:07:47.945 } 00:07:47.945 ] 00:07:47.945 }' 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.945 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.514 [2024-11-16 18:48:31.730028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.514 [2024-11-16 18:48:31.730095] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.514 [2024-11-16 18:48:31.730189] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.514 [2024-11-16 18:48:31.730252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.514 [2024-11-16 18:48:31.730298] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.514 [2024-11-16 18:48:31.789912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:48.514 [2024-11-16 18:48:31.789968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.514 [2024-11-16 18:48:31.789985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:07:48.514 [2024-11-16 18:48:31.789995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.514 [2024-11-16 18:48:31.792129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.514 [2024-11-16 18:48:31.792224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:48.514 [2024-11-16 18:48:31.792323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:48.514 [2024-11-16 18:48:31.792374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:48.514 [2024-11-16 18:48:31.792509] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:07:48.514 [2024-11-16 18:48:31.792524] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:48.514 [2024-11-16 18:48:31.792810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:48.514 [2024-11-16 18:48:31.792984] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:07:48.514 [2024-11-16 18:48:31.792994] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:07:48.514 [2024-11-16 18:48:31.793156] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.514 pt2 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.514 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:07:48.515 "name": "raid_bdev1", 00:07:48.515 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:48.515 "strip_size_kb": 0, 00:07:48.515 "state": "online", 00:07:48.515 "raid_level": "raid1", 00:07:48.515 "superblock": true, 00:07:48.515 "num_base_bdevs": 2, 00:07:48.515 "num_base_bdevs_discovered": 1, 00:07:48.515 "num_base_bdevs_operational": 1, 00:07:48.515 "base_bdevs_list": [ 00:07:48.515 { 00:07:48.515 "name": null, 00:07:48.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:48.515 "is_configured": false, 00:07:48.515 "data_offset": 2048, 00:07:48.515 "data_size": 63488 00:07:48.515 }, 00:07:48.515 { 00:07:48.515 "name": "pt2", 00:07:48.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:48.515 "is_configured": true, 00:07:48.515 "data_offset": 2048, 00:07:48.515 "data_size": 63488 00:07:48.515 } 00:07:48.515 ] 00:07:48.515 }' 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:48.515 18:48:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.775 [2024-11-16 18:48:32.185186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.775 [2024-11-16 18:48:32.185252] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:48.775 [2024-11-16 18:48:32.185322] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.775 [2024-11-16 18:48:32.185382] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.775 [2024-11-16 18:48:32.185436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.775 [2024-11-16 18:48:32.233149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:48.775 [2024-11-16 18:48:32.233253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:48.775 [2024-11-16 18:48:32.233287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:07:48.775 [2024-11-16 18:48:32.233315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:48.775 [2024-11-16 18:48:32.235510] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:48.775 [2024-11-16 18:48:32.235578] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:48.775 [2024-11-16 18:48:32.235684] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:48.775 [2024-11-16 18:48:32.235757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:48.775 [2024-11-16 18:48:32.235943] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:07:48.775 [2024-11-16 18:48:32.236003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:48.775 [2024-11-16 18:48:32.236045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:07:48.775 [2024-11-16 18:48:32.236146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:48.775 [2024-11-16 18:48:32.236268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:07:48.775 [2024-11-16 18:48:32.236309] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:48.775 [2024-11-16 18:48:32.236594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:07:48.775 [2024-11-16 18:48:32.236809] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:07:48.775 [2024-11-16 18:48:32.236867] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:07:48.775 [2024-11-16 18:48:32.237041] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.775 pt1 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:48.775 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:49.034 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.034 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.034 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.034 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.034 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:49.034 "name": "raid_bdev1", 00:07:49.034 "uuid": "ab00a9a0-a7ac-4f27-bafe-bc504936d768", 00:07:49.034 "strip_size_kb": 0, 00:07:49.034 "state": "online", 00:07:49.034 "raid_level": "raid1", 00:07:49.034 "superblock": true, 00:07:49.034 "num_base_bdevs": 2, 00:07:49.034 "num_base_bdevs_discovered": 1, 00:07:49.034 "num_base_bdevs_operational": 
1, 00:07:49.034 "base_bdevs_list": [ 00:07:49.034 { 00:07:49.034 "name": null, 00:07:49.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:49.034 "is_configured": false, 00:07:49.034 "data_offset": 2048, 00:07:49.034 "data_size": 63488 00:07:49.034 }, 00:07:49.034 { 00:07:49.034 "name": "pt2", 00:07:49.034 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:49.034 "is_configured": true, 00:07:49.034 "data_offset": 2048, 00:07:49.034 "data_size": 63488 00:07:49.034 } 00:07:49.035 ] 00:07:49.035 }' 00:07:49.035 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:49.035 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.294 [2024-11-16 18:48:32.712536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' ab00a9a0-a7ac-4f27-bafe-bc504936d768 '!=' ab00a9a0-a7ac-4f27-bafe-bc504936d768 ']' 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63056 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63056 ']' 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63056 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.294 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63056 00:07:49.554 killing process with pid 63056 00:07:49.554 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.554 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.554 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63056' 00:07:49.554 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63056 00:07:49.554 [2024-11-16 18:48:32.785399] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:49.554 [2024-11-16 18:48:32.785477] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:49.554 [2024-11-16 18:48:32.785524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:49.554 [2024-11-16 18:48:32.785538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:07:49.554 18:48:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 
63056 00:07:49.554 [2024-11-16 18:48:32.980959] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.965 18:48:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:07:50.965 00:07:50.965 real 0m5.755s 00:07:50.965 user 0m8.720s 00:07:50.965 sys 0m0.990s 00:07:50.965 18:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:50.965 18:48:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.965 ************************************ 00:07:50.965 END TEST raid_superblock_test 00:07:50.965 ************************************ 00:07:50.965 18:48:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:07:50.965 18:48:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:50.965 18:48:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.965 18:48:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.965 ************************************ 00:07:50.965 START TEST raid_read_error_test 00:07:50.965 ************************************ 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YsCTbvtfqs 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63380 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63380 00:07:50.965 
18:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63380 ']' 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.965 18:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.965 [2024-11-16 18:48:34.184289] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:50.966 [2024-11-16 18:48:34.184481] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63380 ] 00:07:50.966 [2024-11-16 18:48:34.351103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.225 [2024-11-16 18:48:34.457250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.225 [2024-11-16 18:48:34.658568] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.225 [2024-11-16 18:48:34.658656] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.795 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.795 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 BaseBdev1_malloc 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 true 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 [2024-11-16 18:48:35.066095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:51.796 [2024-11-16 18:48:35.066145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.796 [2024-11-16 18:48:35.066163] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:51.796 [2024-11-16 18:48:35.066173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.796 [2024-11-16 18:48:35.068142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.796 [2024-11-16 18:48:35.068184] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:07:51.796 BaseBdev1 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 BaseBdev2_malloc 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 true 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 [2024-11-16 18:48:35.129414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:51.796 [2024-11-16 18:48:35.129473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:51.796 [2024-11-16 18:48:35.129488] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:51.796 [2024-11-16 18:48:35.129498] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:51.796 [2024-11-16 18:48:35.131528] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:51.796 [2024-11-16 18:48:35.131566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:51.796 BaseBdev2 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 [2024-11-16 18:48:35.141442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.796 [2024-11-16 18:48:35.143199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.796 [2024-11-16 18:48:35.143379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:51.796 [2024-11-16 18:48:35.143394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:51.796 [2024-11-16 18:48:35.143614] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:51.796 [2024-11-16 18:48:35.143792] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:51.796 [2024-11-16 18:48:35.143802] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:51.796 [2024-11-16 18:48:35.143948] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.796 "name": "raid_bdev1", 00:07:51.796 "uuid": "2cae9d84-bd95-42ba-bdfb-d2eee739da20", 00:07:51.796 "strip_size_kb": 0, 00:07:51.796 "state": "online", 00:07:51.796 "raid_level": "raid1", 00:07:51.796 "superblock": true, 00:07:51.796 "num_base_bdevs": 2, 00:07:51.796 
"num_base_bdevs_discovered": 2, 00:07:51.796 "num_base_bdevs_operational": 2, 00:07:51.796 "base_bdevs_list": [ 00:07:51.796 { 00:07:51.796 "name": "BaseBdev1", 00:07:51.796 "uuid": "da9c2ae9-8b18-5bbe-95ad-75a672a9a853", 00:07:51.796 "is_configured": true, 00:07:51.796 "data_offset": 2048, 00:07:51.796 "data_size": 63488 00:07:51.796 }, 00:07:51.796 { 00:07:51.796 "name": "BaseBdev2", 00:07:51.796 "uuid": "f4f9566a-cd1f-5dd6-b39e-e52228933a73", 00:07:51.796 "is_configured": true, 00:07:51.796 "data_offset": 2048, 00:07:51.796 "data_size": 63488 00:07:51.796 } 00:07:51.796 ] 00:07:51.796 }' 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.796 18:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.366 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:52.366 18:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:52.366 [2024-11-16 18:48:35.641889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:07:53.307 18:48:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:53.307 "name": "raid_bdev1", 00:07:53.307 "uuid": "2cae9d84-bd95-42ba-bdfb-d2eee739da20", 00:07:53.307 "strip_size_kb": 0, 00:07:53.307 "state": "online", 
00:07:53.307 "raid_level": "raid1", 00:07:53.307 "superblock": true, 00:07:53.307 "num_base_bdevs": 2, 00:07:53.307 "num_base_bdevs_discovered": 2, 00:07:53.307 "num_base_bdevs_operational": 2, 00:07:53.307 "base_bdevs_list": [ 00:07:53.307 { 00:07:53.307 "name": "BaseBdev1", 00:07:53.307 "uuid": "da9c2ae9-8b18-5bbe-95ad-75a672a9a853", 00:07:53.307 "is_configured": true, 00:07:53.307 "data_offset": 2048, 00:07:53.307 "data_size": 63488 00:07:53.307 }, 00:07:53.307 { 00:07:53.307 "name": "BaseBdev2", 00:07:53.307 "uuid": "f4f9566a-cd1f-5dd6-b39e-e52228933a73", 00:07:53.307 "is_configured": true, 00:07:53.307 "data_offset": 2048, 00:07:53.307 "data_size": 63488 00:07:53.307 } 00:07:53.307 ] 00:07:53.307 }' 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:53.307 18:48:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.567 18:48:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:53.567 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.567 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.567 [2024-11-16 18:48:37.007357] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:53.567 [2024-11-16 18:48:37.007409] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:53.567 [2024-11-16 18:48:37.009983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:53.567 [2024-11-16 18:48:37.010026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:53.567 [2024-11-16 18:48:37.010101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:53.567 [2024-11-16 18:48:37.010113] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:07:53.567 { 00:07:53.567 "results": [ 00:07:53.567 { 00:07:53.567 "job": "raid_bdev1", 00:07:53.567 "core_mask": "0x1", 00:07:53.567 "workload": "randrw", 00:07:53.567 "percentage": 50, 00:07:53.567 "status": "finished", 00:07:53.567 "queue_depth": 1, 00:07:53.567 "io_size": 131072, 00:07:53.567 "runtime": 1.36623, 00:07:53.567 "iops": 19497.449184983496, 00:07:53.567 "mibps": 2437.181148122937, 00:07:53.567 "io_failed": 0, 00:07:53.567 "io_timeout": 0, 00:07:53.567 "avg_latency_us": 48.89260356630103, 00:07:53.567 "min_latency_us": 21.463755458515283, 00:07:53.567 "max_latency_us": 1302.134497816594 00:07:53.567 } 00:07:53.567 ], 00:07:53.567 "core_count": 1 00:07:53.567 } 00:07:53.567 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.567 18:48:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63380 00:07:53.567 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63380 ']' 00:07:53.567 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63380 00:07:53.567 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:07:53.567 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.567 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63380 00:07:53.827 killing process with pid 63380 00:07:53.827 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.827 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.827 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63380' 00:07:53.827 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63380 00:07:53.827 [2024-11-16 
18:48:37.046128] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.827 18:48:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63380 00:07:53.827 [2024-11-16 18:48:37.176898] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YsCTbvtfqs 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:55.209 00:07:55.209 real 0m4.205s 00:07:55.209 user 0m5.024s 00:07:55.209 sys 0m0.529s 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.209 ************************************ 00:07:55.209 END TEST raid_read_error_test 00:07:55.209 ************************************ 00:07:55.209 18:48:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.209 18:48:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:07:55.209 18:48:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:55.209 18:48:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:55.209 18:48:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:55.209 ************************************ 00:07:55.209 START TEST 
raid_write_error_test 00:07:55.209 ************************************ 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:07:55.209 18:48:38 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.eR0V3DpMZ4 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63520 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63520 00:07:55.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63520 ']' 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:55.209 18:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.209 [2024-11-16 18:48:38.464688] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:55.209 [2024-11-16 18:48:38.464795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63520 ] 00:07:55.209 [2024-11-16 18:48:38.635989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.469 [2024-11-16 18:48:38.745068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.469 [2024-11-16 18:48:38.933118] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.469 [2024-11-16 18:48:38.933150] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.038 BaseBdev1_malloc 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.038 true 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.038 [2024-11-16 18:48:39.326087] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:56.038 [2024-11-16 18:48:39.326190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.038 [2024-11-16 18:48:39.326212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:07:56.038 [2024-11-16 18:48:39.326222] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.038 [2024-11-16 18:48:39.328255] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.038 [2024-11-16 18:48:39.328296] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:56.038 BaseBdev1 00:07:56.038 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.039 BaseBdev2_malloc 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:07:56.039 18:48:39 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.039 true 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.039 [2024-11-16 18:48:39.390862] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:56.039 [2024-11-16 18:48:39.390912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:56.039 [2024-11-16 18:48:39.390927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:07:56.039 [2024-11-16 18:48:39.390936] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:56.039 [2024-11-16 18:48:39.392931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:56.039 [2024-11-16 18:48:39.393018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:56.039 BaseBdev2 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.039 [2024-11-16 18:48:39.402895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:07:56.039 [2024-11-16 18:48:39.404706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:56.039 [2024-11-16 18:48:39.404881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:56.039 [2024-11-16 18:48:39.404897] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:07:56.039 [2024-11-16 18:48:39.405114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:07:56.039 [2024-11-16 18:48:39.405277] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:56.039 [2024-11-16 18:48:39.405287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:56.039 [2024-11-16 18:48:39.405407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.039 "name": "raid_bdev1", 00:07:56.039 "uuid": "d36626d4-dd12-459b-9ec0-155eaf7a5df0", 00:07:56.039 "strip_size_kb": 0, 00:07:56.039 "state": "online", 00:07:56.039 "raid_level": "raid1", 00:07:56.039 "superblock": true, 00:07:56.039 "num_base_bdevs": 2, 00:07:56.039 "num_base_bdevs_discovered": 2, 00:07:56.039 "num_base_bdevs_operational": 2, 00:07:56.039 "base_bdevs_list": [ 00:07:56.039 { 00:07:56.039 "name": "BaseBdev1", 00:07:56.039 "uuid": "37abbd6d-011a-51a5-b558-762e38768f3b", 00:07:56.039 "is_configured": true, 00:07:56.039 "data_offset": 2048, 00:07:56.039 "data_size": 63488 00:07:56.039 }, 00:07:56.039 { 00:07:56.039 "name": "BaseBdev2", 00:07:56.039 "uuid": "2e17e0d9-3725-5dd0-974a-b751d402c7e4", 00:07:56.039 "is_configured": true, 00:07:56.039 "data_offset": 2048, 00:07:56.039 "data_size": 63488 00:07:56.039 } 00:07:56.039 ] 00:07:56.039 }' 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.039 18:48:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.608 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:56.608 18:48:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:07:56.608 [2024-11-16 18:48:39.935324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.548 [2024-11-16 18:48:40.862924] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:07:57.548 [2024-11-16 18:48:40.863094] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.548 [2024-11-16 18:48:40.863317] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.548 18:48:40 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.548 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.549 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:57.549 18:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:57.549 18:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:57.549 18:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:57.549 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.549 "name": "raid_bdev1", 00:07:57.549 "uuid": "d36626d4-dd12-459b-9ec0-155eaf7a5df0", 00:07:57.549 "strip_size_kb": 0, 00:07:57.549 "state": "online", 00:07:57.549 "raid_level": "raid1", 00:07:57.549 "superblock": true, 00:07:57.549 "num_base_bdevs": 2, 00:07:57.549 "num_base_bdevs_discovered": 1, 00:07:57.549 "num_base_bdevs_operational": 1, 00:07:57.549 "base_bdevs_list": [ 00:07:57.549 { 00:07:57.549 "name": null, 00:07:57.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.549 "is_configured": false, 00:07:57.549 "data_offset": 0, 00:07:57.549 "data_size": 63488 00:07:57.549 }, 
00:07:57.549 { 00:07:57.549 "name": "BaseBdev2", 00:07:57.549 "uuid": "2e17e0d9-3725-5dd0-974a-b751d402c7e4", 00:07:57.549 "is_configured": true, 00:07:57.549 "data_offset": 2048, 00:07:57.549 "data_size": 63488 00:07:57.549 } 00:07:57.549 ] 00:07:57.549 }' 00:07:57.549 18:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.549 18:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.118 18:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:58.118 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:58.118 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.118 [2024-11-16 18:48:41.319722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:58.118 [2024-11-16 18:48:41.319752] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.118 { 00:07:58.118 "results": [ 00:07:58.118 { 00:07:58.118 "job": "raid_bdev1", 00:07:58.118 "core_mask": "0x1", 00:07:58.118 "workload": "randrw", 00:07:58.118 "percentage": 50, 00:07:58.119 "status": "finished", 00:07:58.119 "queue_depth": 1, 00:07:58.119 "io_size": 131072, 00:07:58.119 "runtime": 1.385224, 00:07:58.119 "iops": 22903.876918101334, 00:07:58.119 "mibps": 2862.9846147626668, 00:07:58.119 "io_failed": 0, 00:07:58.119 "io_timeout": 0, 00:07:58.119 "avg_latency_us": 41.23879664985797, 00:07:58.119 "min_latency_us": 21.016593886462882, 00:07:58.119 "max_latency_us": 1359.3711790393013 00:07:58.119 } 00:07:58.119 ], 00:07:58.119 "core_count": 1 00:07:58.119 } 00:07:58.119 [2024-11-16 18:48:41.322246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.119 [2024-11-16 18:48:41.322288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.119 [2024-11-16 18:48:41.322343] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:58.119 [2024-11-16 18:48:41.322353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63520 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63520 ']' 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63520 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63520 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63520' 00:07:58.119 killing process with pid 63520 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63520 00:07:58.119 [2024-11-16 18:48:41.369154] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.119 18:48:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63520 00:07:58.119 [2024-11-16 18:48:41.497364] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:59.499 18:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.eR0V3DpMZ4 00:07:59.499 18:48:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:59.499 18:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:59.499 18:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:07:59.499 18:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:07:59.499 18:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:59.499 18:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:07:59.499 18:48:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:07:59.499 00:07:59.499 real 0m4.252s 00:07:59.499 user 0m5.109s 00:07:59.499 sys 0m0.514s 00:07:59.499 ************************************ 00:07:59.499 END TEST raid_write_error_test 00:07:59.499 ************************************ 00:07:59.499 18:48:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:59.499 18:48:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.499 18:48:42 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:59.499 18:48:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:59.499 18:48:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:07:59.499 18:48:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:59.499 18:48:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:59.499 18:48:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.499 ************************************ 00:07:59.499 START TEST raid_state_function_test 00:07:59.499 ************************************ 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:07:59.499 18:48:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@211 -- # local strip_size 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63664 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63664' 00:07:59.499 Process raid pid: 63664 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63664 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63664 ']' 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:59.499 18:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.499 [2024-11-16 18:48:42.778087] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:59.499 [2024-11-16 18:48:42.778285] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.499 [2024-11-16 18:48:42.949414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.761 [2024-11-16 18:48:43.061016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.021 [2024-11-16 18:48:43.249348] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.021 [2024-11-16 18:48:43.249433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.280 18:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.280 18:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:00.280 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:00.280 18:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.280 18:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.280 [2024-11-16 18:48:43.616881] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.280 [2024-11-16 18:48:43.617015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.280 [2024-11-16 18:48:43.617030] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.281 [2024-11-16 18:48:43.617040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.281 [2024-11-16 18:48:43.617047] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:00.281 [2024-11-16 18:48:43.617055] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.281 "name": "Existed_Raid", 00:08:00.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.281 "strip_size_kb": 64, 00:08:00.281 "state": "configuring", 00:08:00.281 "raid_level": "raid0", 00:08:00.281 "superblock": false, 00:08:00.281 "num_base_bdevs": 3, 00:08:00.281 "num_base_bdevs_discovered": 0, 00:08:00.281 "num_base_bdevs_operational": 3, 00:08:00.281 "base_bdevs_list": [ 00:08:00.281 { 00:08:00.281 "name": "BaseBdev1", 00:08:00.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.281 "is_configured": false, 00:08:00.281 "data_offset": 0, 00:08:00.281 "data_size": 0 00:08:00.281 }, 00:08:00.281 { 00:08:00.281 "name": "BaseBdev2", 00:08:00.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.281 "is_configured": false, 00:08:00.281 "data_offset": 0, 00:08:00.281 "data_size": 0 00:08:00.281 }, 00:08:00.281 { 00:08:00.281 "name": "BaseBdev3", 00:08:00.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.281 "is_configured": false, 00:08:00.281 "data_offset": 0, 00:08:00.281 "data_size": 0 00:08:00.281 } 00:08:00.281 ] 00:08:00.281 }' 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.281 18:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.850 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.850 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.850 18:48:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.850 [2024-11-16 18:48:44.040076] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.850 [2024-11-16 18:48:44.040152] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.851 [2024-11-16 18:48:44.052061] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.851 [2024-11-16 18:48:44.052140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.851 [2024-11-16 18:48:44.052166] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.851 [2024-11-16 18:48:44.052188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.851 [2024-11-16 18:48:44.052206] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:00.851 [2024-11-16 18:48:44.052227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.851 [2024-11-16 18:48:44.099552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.851 BaseBdev1 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.851 [ 00:08:00.851 { 00:08:00.851 "name": "BaseBdev1", 00:08:00.851 "aliases": [ 00:08:00.851 "b12fc7e8-a202-4b8a-bde3-c8defd9521ee" 00:08:00.851 ], 00:08:00.851 
"product_name": "Malloc disk", 00:08:00.851 "block_size": 512, 00:08:00.851 "num_blocks": 65536, 00:08:00.851 "uuid": "b12fc7e8-a202-4b8a-bde3-c8defd9521ee", 00:08:00.851 "assigned_rate_limits": { 00:08:00.851 "rw_ios_per_sec": 0, 00:08:00.851 "rw_mbytes_per_sec": 0, 00:08:00.851 "r_mbytes_per_sec": 0, 00:08:00.851 "w_mbytes_per_sec": 0 00:08:00.851 }, 00:08:00.851 "claimed": true, 00:08:00.851 "claim_type": "exclusive_write", 00:08:00.851 "zoned": false, 00:08:00.851 "supported_io_types": { 00:08:00.851 "read": true, 00:08:00.851 "write": true, 00:08:00.851 "unmap": true, 00:08:00.851 "flush": true, 00:08:00.851 "reset": true, 00:08:00.851 "nvme_admin": false, 00:08:00.851 "nvme_io": false, 00:08:00.851 "nvme_io_md": false, 00:08:00.851 "write_zeroes": true, 00:08:00.851 "zcopy": true, 00:08:00.851 "get_zone_info": false, 00:08:00.851 "zone_management": false, 00:08:00.851 "zone_append": false, 00:08:00.851 "compare": false, 00:08:00.851 "compare_and_write": false, 00:08:00.851 "abort": true, 00:08:00.851 "seek_hole": false, 00:08:00.851 "seek_data": false, 00:08:00.851 "copy": true, 00:08:00.851 "nvme_iov_md": false 00:08:00.851 }, 00:08:00.851 "memory_domains": [ 00:08:00.851 { 00:08:00.851 "dma_device_id": "system", 00:08:00.851 "dma_device_type": 1 00:08:00.851 }, 00:08:00.851 { 00:08:00.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.851 "dma_device_type": 2 00:08:00.851 } 00:08:00.851 ], 00:08:00.851 "driver_specific": {} 00:08:00.851 } 00:08:00.851 ] 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.851 18:48:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.851 "name": "Existed_Raid", 00:08:00.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.851 "strip_size_kb": 64, 00:08:00.851 "state": "configuring", 00:08:00.851 "raid_level": "raid0", 00:08:00.851 "superblock": false, 00:08:00.851 "num_base_bdevs": 3, 00:08:00.851 "num_base_bdevs_discovered": 1, 00:08:00.851 "num_base_bdevs_operational": 3, 00:08:00.851 "base_bdevs_list": [ 00:08:00.851 { 00:08:00.851 "name": "BaseBdev1", 
00:08:00.851 "uuid": "b12fc7e8-a202-4b8a-bde3-c8defd9521ee", 00:08:00.851 "is_configured": true, 00:08:00.851 "data_offset": 0, 00:08:00.851 "data_size": 65536 00:08:00.851 }, 00:08:00.851 { 00:08:00.851 "name": "BaseBdev2", 00:08:00.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.851 "is_configured": false, 00:08:00.851 "data_offset": 0, 00:08:00.851 "data_size": 0 00:08:00.851 }, 00:08:00.851 { 00:08:00.851 "name": "BaseBdev3", 00:08:00.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.851 "is_configured": false, 00:08:00.851 "data_offset": 0, 00:08:00.851 "data_size": 0 00:08:00.851 } 00:08:00.851 ] 00:08:00.851 }' 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.851 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.111 [2024-11-16 18:48:44.530826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.111 [2024-11-16 18:48:44.530926] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.111 [2024-11-16 
18:48:44.542852] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.111 [2024-11-16 18:48:44.544574] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.111 [2024-11-16 18:48:44.544615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.111 [2024-11-16 18:48:44.544624] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:01.111 [2024-11-16 18:48:44.544632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.111 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.112 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.112 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.371 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.371 "name": "Existed_Raid", 00:08:01.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.372 "strip_size_kb": 64, 00:08:01.372 "state": "configuring", 00:08:01.372 "raid_level": "raid0", 00:08:01.372 "superblock": false, 00:08:01.372 "num_base_bdevs": 3, 00:08:01.372 "num_base_bdevs_discovered": 1, 00:08:01.372 "num_base_bdevs_operational": 3, 00:08:01.372 "base_bdevs_list": [ 00:08:01.372 { 00:08:01.372 "name": "BaseBdev1", 00:08:01.372 "uuid": "b12fc7e8-a202-4b8a-bde3-c8defd9521ee", 00:08:01.372 "is_configured": true, 00:08:01.372 "data_offset": 0, 00:08:01.372 "data_size": 65536 00:08:01.372 }, 00:08:01.372 { 00:08:01.372 "name": "BaseBdev2", 00:08:01.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.372 "is_configured": false, 00:08:01.372 "data_offset": 0, 00:08:01.372 "data_size": 0 00:08:01.372 }, 00:08:01.372 { 00:08:01.372 "name": "BaseBdev3", 00:08:01.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.372 "is_configured": false, 00:08:01.372 "data_offset": 0, 00:08:01.372 "data_size": 0 00:08:01.372 } 00:08:01.372 ] 00:08:01.372 }' 00:08:01.372 18:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:01.372 18:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.632 [2024-11-16 18:48:45.042889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:01.632 BaseBdev2 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:01.632 18:48:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.632 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.632 [ 00:08:01.632 { 00:08:01.632 "name": "BaseBdev2", 00:08:01.632 "aliases": [ 00:08:01.632 "075940bf-1ac8-489d-9348-a60372730e2a" 00:08:01.632 ], 00:08:01.632 "product_name": "Malloc disk", 00:08:01.632 "block_size": 512, 00:08:01.632 "num_blocks": 65536, 00:08:01.632 "uuid": "075940bf-1ac8-489d-9348-a60372730e2a", 00:08:01.632 "assigned_rate_limits": { 00:08:01.632 "rw_ios_per_sec": 0, 00:08:01.632 "rw_mbytes_per_sec": 0, 00:08:01.632 "r_mbytes_per_sec": 0, 00:08:01.632 "w_mbytes_per_sec": 0 00:08:01.632 }, 00:08:01.632 "claimed": true, 00:08:01.632 "claim_type": "exclusive_write", 00:08:01.632 "zoned": false, 00:08:01.632 "supported_io_types": { 00:08:01.632 "read": true, 00:08:01.632 "write": true, 00:08:01.632 "unmap": true, 00:08:01.632 "flush": true, 00:08:01.632 "reset": true, 00:08:01.632 "nvme_admin": false, 00:08:01.632 "nvme_io": false, 00:08:01.632 "nvme_io_md": false, 00:08:01.632 "write_zeroes": true, 00:08:01.632 "zcopy": true, 00:08:01.632 "get_zone_info": false, 00:08:01.632 "zone_management": false, 00:08:01.632 "zone_append": false, 00:08:01.632 "compare": false, 00:08:01.632 "compare_and_write": false, 00:08:01.632 "abort": true, 00:08:01.633 "seek_hole": false, 00:08:01.633 "seek_data": false, 00:08:01.633 "copy": true, 00:08:01.633 "nvme_iov_md": false 00:08:01.633 }, 00:08:01.633 "memory_domains": [ 00:08:01.633 { 00:08:01.633 "dma_device_id": "system", 00:08:01.633 "dma_device_type": 1 00:08:01.633 }, 00:08:01.633 { 00:08:01.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.633 "dma_device_type": 2 00:08:01.633 } 00:08:01.633 ], 00:08:01.633 "driver_specific": {} 00:08:01.633 } 00:08:01.633 ] 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.633 18:48:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.633 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.893 18:48:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.893 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.893 "name": "Existed_Raid", 00:08:01.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.893 "strip_size_kb": 64, 00:08:01.893 "state": "configuring", 00:08:01.893 "raid_level": "raid0", 00:08:01.893 "superblock": false, 00:08:01.893 "num_base_bdevs": 3, 00:08:01.893 "num_base_bdevs_discovered": 2, 00:08:01.893 "num_base_bdevs_operational": 3, 00:08:01.893 "base_bdevs_list": [ 00:08:01.893 { 00:08:01.893 "name": "BaseBdev1", 00:08:01.893 "uuid": "b12fc7e8-a202-4b8a-bde3-c8defd9521ee", 00:08:01.893 "is_configured": true, 00:08:01.893 "data_offset": 0, 00:08:01.893 "data_size": 65536 00:08:01.893 }, 00:08:01.893 { 00:08:01.893 "name": "BaseBdev2", 00:08:01.893 "uuid": "075940bf-1ac8-489d-9348-a60372730e2a", 00:08:01.893 "is_configured": true, 00:08:01.893 "data_offset": 0, 00:08:01.893 "data_size": 65536 00:08:01.893 }, 00:08:01.893 { 00:08:01.893 "name": "BaseBdev3", 00:08:01.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.893 "is_configured": false, 00:08:01.893 "data_offset": 0, 00:08:01.893 "data_size": 0 00:08:01.893 } 00:08:01.893 ] 00:08:01.893 }' 00:08:01.893 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.893 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.153 [2024-11-16 18:48:45.585028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:02.153 [2024-11-16 18:48:45.585133] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.153 [2024-11-16 18:48:45.585152] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:02.153 [2024-11-16 18:48:45.585426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:02.153 [2024-11-16 18:48:45.585588] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.153 [2024-11-16 18:48:45.585597] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:02.153 [2024-11-16 18:48:45.585902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.153 BaseBdev3 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.153 
18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.153 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.153 [ 00:08:02.153 { 00:08:02.153 "name": "BaseBdev3", 00:08:02.153 "aliases": [ 00:08:02.153 "a5cb1f70-4a7a-4c0d-90e6-70ab187ba556" 00:08:02.153 ], 00:08:02.153 "product_name": "Malloc disk", 00:08:02.153 "block_size": 512, 00:08:02.153 "num_blocks": 65536, 00:08:02.153 "uuid": "a5cb1f70-4a7a-4c0d-90e6-70ab187ba556", 00:08:02.153 "assigned_rate_limits": { 00:08:02.153 "rw_ios_per_sec": 0, 00:08:02.153 "rw_mbytes_per_sec": 0, 00:08:02.153 "r_mbytes_per_sec": 0, 00:08:02.153 "w_mbytes_per_sec": 0 00:08:02.153 }, 00:08:02.153 "claimed": true, 00:08:02.153 "claim_type": "exclusive_write", 00:08:02.153 "zoned": false, 00:08:02.153 "supported_io_types": { 00:08:02.153 "read": true, 00:08:02.153 "write": true, 00:08:02.153 "unmap": true, 00:08:02.153 "flush": true, 00:08:02.153 "reset": true, 00:08:02.153 "nvme_admin": false, 00:08:02.153 "nvme_io": false, 00:08:02.153 "nvme_io_md": false, 00:08:02.153 "write_zeroes": true, 00:08:02.153 "zcopy": true, 00:08:02.153 "get_zone_info": false, 00:08:02.153 "zone_management": false, 00:08:02.153 "zone_append": false, 00:08:02.153 "compare": false, 00:08:02.153 "compare_and_write": false, 00:08:02.153 "abort": true, 00:08:02.153 "seek_hole": false, 00:08:02.153 "seek_data": false, 00:08:02.153 "copy": true, 00:08:02.153 "nvme_iov_md": false 00:08:02.153 }, 00:08:02.413 "memory_domains": [ 00:08:02.413 { 00:08:02.413 "dma_device_id": "system", 00:08:02.413 "dma_device_type": 1 00:08:02.413 }, 00:08:02.413 { 00:08:02.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.413 "dma_device_type": 2 00:08:02.413 } 00:08:02.413 ], 00:08:02.413 "driver_specific": {} 00:08:02.413 } 00:08:02.413 ] 
00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.413 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.414 "name": "Existed_Raid", 00:08:02.414 "uuid": "a838ec0a-6441-4576-8ddc-ba97b007070f", 00:08:02.414 "strip_size_kb": 64, 00:08:02.414 "state": "online", 00:08:02.414 "raid_level": "raid0", 00:08:02.414 "superblock": false, 00:08:02.414 "num_base_bdevs": 3, 00:08:02.414 "num_base_bdevs_discovered": 3, 00:08:02.414 "num_base_bdevs_operational": 3, 00:08:02.414 "base_bdevs_list": [ 00:08:02.414 { 00:08:02.414 "name": "BaseBdev1", 00:08:02.414 "uuid": "b12fc7e8-a202-4b8a-bde3-c8defd9521ee", 00:08:02.414 "is_configured": true, 00:08:02.414 "data_offset": 0, 00:08:02.414 "data_size": 65536 00:08:02.414 }, 00:08:02.414 { 00:08:02.414 "name": "BaseBdev2", 00:08:02.414 "uuid": "075940bf-1ac8-489d-9348-a60372730e2a", 00:08:02.414 "is_configured": true, 00:08:02.414 "data_offset": 0, 00:08:02.414 "data_size": 65536 00:08:02.414 }, 00:08:02.414 { 00:08:02.414 "name": "BaseBdev3", 00:08:02.414 "uuid": "a5cb1f70-4a7a-4c0d-90e6-70ab187ba556", 00:08:02.414 "is_configured": true, 00:08:02.414 "data_offset": 0, 00:08:02.414 "data_size": 65536 00:08:02.414 } 00:08:02.414 ] 00:08:02.414 }' 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.414 18:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.674 [2024-11-16 18:48:46.024600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.674 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:02.674 "name": "Existed_Raid", 00:08:02.674 "aliases": [ 00:08:02.674 "a838ec0a-6441-4576-8ddc-ba97b007070f" 00:08:02.674 ], 00:08:02.674 "product_name": "Raid Volume", 00:08:02.674 "block_size": 512, 00:08:02.674 "num_blocks": 196608, 00:08:02.674 "uuid": "a838ec0a-6441-4576-8ddc-ba97b007070f", 00:08:02.674 "assigned_rate_limits": { 00:08:02.674 "rw_ios_per_sec": 0, 00:08:02.674 "rw_mbytes_per_sec": 0, 00:08:02.675 "r_mbytes_per_sec": 0, 00:08:02.675 "w_mbytes_per_sec": 0 00:08:02.675 }, 00:08:02.675 "claimed": false, 00:08:02.675 "zoned": false, 00:08:02.675 "supported_io_types": { 00:08:02.675 "read": true, 00:08:02.675 "write": true, 00:08:02.675 "unmap": true, 00:08:02.675 "flush": true, 00:08:02.675 "reset": true, 00:08:02.675 "nvme_admin": false, 00:08:02.675 "nvme_io": false, 00:08:02.675 "nvme_io_md": false, 00:08:02.675 "write_zeroes": true, 00:08:02.675 "zcopy": false, 00:08:02.675 "get_zone_info": false, 00:08:02.675 "zone_management": false, 00:08:02.675 
"zone_append": false, 00:08:02.675 "compare": false, 00:08:02.675 "compare_and_write": false, 00:08:02.675 "abort": false, 00:08:02.675 "seek_hole": false, 00:08:02.675 "seek_data": false, 00:08:02.675 "copy": false, 00:08:02.675 "nvme_iov_md": false 00:08:02.675 }, 00:08:02.675 "memory_domains": [ 00:08:02.675 { 00:08:02.675 "dma_device_id": "system", 00:08:02.675 "dma_device_type": 1 00:08:02.675 }, 00:08:02.675 { 00:08:02.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.675 "dma_device_type": 2 00:08:02.675 }, 00:08:02.675 { 00:08:02.675 "dma_device_id": "system", 00:08:02.675 "dma_device_type": 1 00:08:02.675 }, 00:08:02.675 { 00:08:02.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.675 "dma_device_type": 2 00:08:02.675 }, 00:08:02.675 { 00:08:02.675 "dma_device_id": "system", 00:08:02.675 "dma_device_type": 1 00:08:02.675 }, 00:08:02.675 { 00:08:02.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.675 "dma_device_type": 2 00:08:02.675 } 00:08:02.675 ], 00:08:02.675 "driver_specific": { 00:08:02.675 "raid": { 00:08:02.675 "uuid": "a838ec0a-6441-4576-8ddc-ba97b007070f", 00:08:02.675 "strip_size_kb": 64, 00:08:02.675 "state": "online", 00:08:02.675 "raid_level": "raid0", 00:08:02.675 "superblock": false, 00:08:02.675 "num_base_bdevs": 3, 00:08:02.675 "num_base_bdevs_discovered": 3, 00:08:02.675 "num_base_bdevs_operational": 3, 00:08:02.675 "base_bdevs_list": [ 00:08:02.675 { 00:08:02.675 "name": "BaseBdev1", 00:08:02.675 "uuid": "b12fc7e8-a202-4b8a-bde3-c8defd9521ee", 00:08:02.675 "is_configured": true, 00:08:02.675 "data_offset": 0, 00:08:02.675 "data_size": 65536 00:08:02.675 }, 00:08:02.675 { 00:08:02.675 "name": "BaseBdev2", 00:08:02.675 "uuid": "075940bf-1ac8-489d-9348-a60372730e2a", 00:08:02.675 "is_configured": true, 00:08:02.675 "data_offset": 0, 00:08:02.675 "data_size": 65536 00:08:02.675 }, 00:08:02.675 { 00:08:02.675 "name": "BaseBdev3", 00:08:02.675 "uuid": "a5cb1f70-4a7a-4c0d-90e6-70ab187ba556", 00:08:02.675 "is_configured": true, 
00:08:02.675 "data_offset": 0, 00:08:02.675 "data_size": 65536 00:08:02.675 } 00:08:02.675 ] 00:08:02.675 } 00:08:02.675 } 00:08:02.675 }' 00:08:02.675 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:02.675 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:02.675 BaseBdev2 00:08:02.675 BaseBdev3' 00:08:02.675 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.935 18:48:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.935 [2024-11-16 18:48:46.283900] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:02.935 [2024-11-16 18:48:46.283967] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.935 [2024-11-16 18:48:46.284020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:02.935 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.936 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.195 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.195 "name": "Existed_Raid", 00:08:03.195 "uuid": "a838ec0a-6441-4576-8ddc-ba97b007070f", 00:08:03.196 "strip_size_kb": 64, 00:08:03.196 "state": "offline", 00:08:03.196 "raid_level": "raid0", 00:08:03.196 "superblock": false, 00:08:03.196 "num_base_bdevs": 3, 00:08:03.196 "num_base_bdevs_discovered": 2, 00:08:03.196 "num_base_bdevs_operational": 2, 00:08:03.196 "base_bdevs_list": [ 00:08:03.196 { 00:08:03.196 "name": null, 00:08:03.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.196 "is_configured": false, 00:08:03.196 "data_offset": 0, 00:08:03.196 "data_size": 65536 00:08:03.196 }, 00:08:03.196 { 00:08:03.196 "name": "BaseBdev2", 00:08:03.196 "uuid": "075940bf-1ac8-489d-9348-a60372730e2a", 00:08:03.196 "is_configured": true, 00:08:03.196 "data_offset": 0, 00:08:03.196 "data_size": 65536 00:08:03.196 }, 00:08:03.196 { 00:08:03.196 "name": "BaseBdev3", 00:08:03.196 "uuid": "a5cb1f70-4a7a-4c0d-90e6-70ab187ba556", 00:08:03.196 "is_configured": true, 00:08:03.196 "data_offset": 0, 00:08:03.196 "data_size": 65536 00:08:03.196 } 00:08:03.196 ] 00:08:03.196 }' 00:08:03.196 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.196 18:48:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.456 [2024-11-16 18:48:46.820807] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.456 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:03.457 18:48:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.457 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.457 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.717 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.717 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:03.717 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:03.717 18:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:03.717 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.717 18:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.717 [2024-11-16 18:48:46.970601] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:03.717 [2024-11-16 18:48:46.970658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:03.717 18:48:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.717 BaseBdev2 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.717 18:48:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.717 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.717 [ 00:08:03.717 { 00:08:03.718 "name": "BaseBdev2", 00:08:03.718 "aliases": [ 00:08:03.718 "1d191588-edee-499e-b59d-b26072e2f30e" 00:08:03.718 ], 00:08:03.718 "product_name": "Malloc disk", 00:08:03.718 "block_size": 512, 00:08:03.718 "num_blocks": 65536, 00:08:03.718 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:03.718 "assigned_rate_limits": { 00:08:03.718 "rw_ios_per_sec": 0, 00:08:03.718 "rw_mbytes_per_sec": 0, 00:08:03.718 "r_mbytes_per_sec": 0, 00:08:03.718 "w_mbytes_per_sec": 0 00:08:03.718 }, 00:08:03.718 "claimed": false, 00:08:03.718 "zoned": false, 00:08:03.718 "supported_io_types": { 00:08:03.718 "read": true, 00:08:03.718 "write": true, 00:08:03.718 "unmap": true, 00:08:03.718 "flush": true, 00:08:03.718 "reset": true, 00:08:03.718 "nvme_admin": false, 00:08:03.718 "nvme_io": false, 00:08:03.718 "nvme_io_md": false, 00:08:03.718 "write_zeroes": true, 00:08:03.718 "zcopy": true, 00:08:03.718 "get_zone_info": false, 00:08:03.718 "zone_management": false, 00:08:03.718 "zone_append": false, 00:08:03.718 "compare": false, 00:08:03.718 "compare_and_write": false, 00:08:03.718 "abort": true, 00:08:03.718 "seek_hole": false, 00:08:03.718 "seek_data": false, 00:08:03.718 "copy": true, 00:08:03.718 "nvme_iov_md": false 00:08:03.718 }, 00:08:03.718 "memory_domains": [ 00:08:03.718 { 00:08:03.718 "dma_device_id": "system", 00:08:03.718 "dma_device_type": 1 00:08:03.718 }, 00:08:03.718 { 00:08:03.718 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:03.718 "dma_device_type": 2 00:08:03.718 } 00:08:03.718 ], 00:08:03.718 "driver_specific": {} 00:08:03.718 } 00:08:03.718 ] 00:08:03.718 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.718 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:03.718 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:03.718 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:03.718 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:03.718 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.718 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.978 BaseBdev3 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.978 18:48:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.978 [ 00:08:03.978 { 00:08:03.978 "name": "BaseBdev3", 00:08:03.978 "aliases": [ 00:08:03.978 "0b9902b7-edc5-4144-bb52-8467f6c4f1ba" 00:08:03.978 ], 00:08:03.978 "product_name": "Malloc disk", 00:08:03.978 "block_size": 512, 00:08:03.978 "num_blocks": 65536, 00:08:03.978 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:03.978 "assigned_rate_limits": { 00:08:03.978 "rw_ios_per_sec": 0, 00:08:03.978 "rw_mbytes_per_sec": 0, 00:08:03.978 "r_mbytes_per_sec": 0, 00:08:03.978 "w_mbytes_per_sec": 0 00:08:03.978 }, 00:08:03.978 "claimed": false, 00:08:03.978 "zoned": false, 00:08:03.978 "supported_io_types": { 00:08:03.978 "read": true, 00:08:03.978 "write": true, 00:08:03.978 "unmap": true, 00:08:03.978 "flush": true, 00:08:03.978 "reset": true, 00:08:03.978 "nvme_admin": false, 00:08:03.978 "nvme_io": false, 00:08:03.978 "nvme_io_md": false, 00:08:03.978 "write_zeroes": true, 00:08:03.978 "zcopy": true, 00:08:03.978 "get_zone_info": false, 00:08:03.978 "zone_management": false, 00:08:03.978 "zone_append": false, 00:08:03.978 "compare": false, 00:08:03.978 "compare_and_write": false, 00:08:03.978 "abort": true, 00:08:03.978 "seek_hole": false, 00:08:03.978 "seek_data": false, 00:08:03.978 "copy": true, 00:08:03.978 "nvme_iov_md": false 00:08:03.978 }, 00:08:03.978 "memory_domains": [ 00:08:03.978 { 00:08:03.978 "dma_device_id": "system", 00:08:03.978 "dma_device_type": 1 00:08:03.978 }, 00:08:03.978 { 00:08:03.978 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:08:03.978 "dma_device_type": 2 00:08:03.978 } 00:08:03.978 ], 00:08:03.978 "driver_specific": {} 00:08:03.978 } 00:08:03.978 ] 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.978 [2024-11-16 18:48:47.266409] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:03.978 [2024-11-16 18:48:47.266515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:03.978 [2024-11-16 18:48:47.266560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.978 [2024-11-16 18:48:47.268378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:03.978 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:03.979 
18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:03.979 "name": "Existed_Raid", 00:08:03.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.979 "strip_size_kb": 64, 00:08:03.979 "state": "configuring", 00:08:03.979 "raid_level": "raid0", 00:08:03.979 "superblock": false, 00:08:03.979 "num_base_bdevs": 3, 00:08:03.979 "num_base_bdevs_discovered": 2, 00:08:03.979 "num_base_bdevs_operational": 3, 00:08:03.979 "base_bdevs_list": [ 00:08:03.979 { 00:08:03.979 "name": "BaseBdev1", 00:08:03.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.979 "is_configured": false, 00:08:03.979 
"data_offset": 0, 00:08:03.979 "data_size": 0 00:08:03.979 }, 00:08:03.979 { 00:08:03.979 "name": "BaseBdev2", 00:08:03.979 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:03.979 "is_configured": true, 00:08:03.979 "data_offset": 0, 00:08:03.979 "data_size": 65536 00:08:03.979 }, 00:08:03.979 { 00:08:03.979 "name": "BaseBdev3", 00:08:03.979 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:03.979 "is_configured": true, 00:08:03.979 "data_offset": 0, 00:08:03.979 "data_size": 65536 00:08:03.979 } 00:08:03.979 ] 00:08:03.979 }' 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:03.979 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.239 [2024-11-16 18:48:47.653783] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.239 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.499 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.499 "name": "Existed_Raid", 00:08:04.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.499 "strip_size_kb": 64, 00:08:04.499 "state": "configuring", 00:08:04.499 "raid_level": "raid0", 00:08:04.499 "superblock": false, 00:08:04.499 "num_base_bdevs": 3, 00:08:04.499 "num_base_bdevs_discovered": 1, 00:08:04.499 "num_base_bdevs_operational": 3, 00:08:04.499 "base_bdevs_list": [ 00:08:04.499 { 00:08:04.499 "name": "BaseBdev1", 00:08:04.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.499 "is_configured": false, 00:08:04.499 "data_offset": 0, 00:08:04.499 "data_size": 0 00:08:04.499 }, 00:08:04.499 { 00:08:04.499 "name": null, 00:08:04.499 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:04.499 "is_configured": false, 00:08:04.499 "data_offset": 0, 00:08:04.499 "data_size": 65536 00:08:04.499 }, 00:08:04.499 { 
00:08:04.499 "name": "BaseBdev3", 00:08:04.499 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:04.499 "is_configured": true, 00:08:04.499 "data_offset": 0, 00:08:04.499 "data_size": 65536 00:08:04.499 } 00:08:04.499 ] 00:08:04.499 }' 00:08:04.499 18:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.499 18:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.760 [2024-11-16 18:48:48.187993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:04.760 BaseBdev1 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:04.760 18:48:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.760 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.760 [ 00:08:04.760 { 00:08:04.760 "name": "BaseBdev1", 00:08:04.760 "aliases": [ 00:08:04.760 "d94ad210-7c0c-4262-a39b-e64883116c09" 00:08:04.760 ], 00:08:04.760 "product_name": "Malloc disk", 00:08:04.760 "block_size": 512, 00:08:04.760 "num_blocks": 65536, 00:08:04.760 "uuid": "d94ad210-7c0c-4262-a39b-e64883116c09", 00:08:04.760 "assigned_rate_limits": { 00:08:04.760 "rw_ios_per_sec": 0, 00:08:04.760 "rw_mbytes_per_sec": 0, 00:08:04.760 "r_mbytes_per_sec": 0, 00:08:04.760 "w_mbytes_per_sec": 0 00:08:04.760 }, 00:08:04.760 "claimed": true, 00:08:04.760 "claim_type": "exclusive_write", 00:08:04.760 "zoned": false, 00:08:04.760 "supported_io_types": { 00:08:04.760 "read": true, 00:08:04.760 "write": true, 00:08:04.760 "unmap": true, 00:08:04.760 "flush": true, 
00:08:04.760 "reset": true, 00:08:04.760 "nvme_admin": false, 00:08:04.760 "nvme_io": false, 00:08:04.760 "nvme_io_md": false, 00:08:04.760 "write_zeroes": true, 00:08:04.760 "zcopy": true, 00:08:04.760 "get_zone_info": false, 00:08:04.760 "zone_management": false, 00:08:04.760 "zone_append": false, 00:08:04.760 "compare": false, 00:08:04.760 "compare_and_write": false, 00:08:04.761 "abort": true, 00:08:04.761 "seek_hole": false, 00:08:04.761 "seek_data": false, 00:08:04.761 "copy": true, 00:08:04.761 "nvme_iov_md": false 00:08:04.761 }, 00:08:04.761 "memory_domains": [ 00:08:04.761 { 00:08:04.761 "dma_device_id": "system", 00:08:04.761 "dma_device_type": 1 00:08:04.761 }, 00:08:04.761 { 00:08:04.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.761 "dma_device_type": 2 00:08:04.761 } 00:08:04.761 ], 00:08:04.761 "driver_specific": {} 00:08:04.761 } 00:08:04.761 ] 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.761 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.022 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.022 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.022 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.022 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.022 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.022 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.022 "name": "Existed_Raid", 00:08:05.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.022 "strip_size_kb": 64, 00:08:05.022 "state": "configuring", 00:08:05.022 "raid_level": "raid0", 00:08:05.022 "superblock": false, 00:08:05.022 "num_base_bdevs": 3, 00:08:05.022 "num_base_bdevs_discovered": 2, 00:08:05.022 "num_base_bdevs_operational": 3, 00:08:05.022 "base_bdevs_list": [ 00:08:05.022 { 00:08:05.022 "name": "BaseBdev1", 00:08:05.022 "uuid": "d94ad210-7c0c-4262-a39b-e64883116c09", 00:08:05.022 "is_configured": true, 00:08:05.022 "data_offset": 0, 00:08:05.022 "data_size": 65536 00:08:05.022 }, 00:08:05.022 { 00:08:05.022 "name": null, 00:08:05.022 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:05.022 "is_configured": false, 00:08:05.022 "data_offset": 0, 00:08:05.022 "data_size": 65536 00:08:05.022 }, 00:08:05.022 { 00:08:05.022 "name": "BaseBdev3", 00:08:05.022 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:05.022 "is_configured": true, 00:08:05.022 "data_offset": 0, 00:08:05.022 "data_size": 65536 
00:08:05.022 } 00:08:05.022 ] 00:08:05.022 }' 00:08:05.022 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.022 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.282 [2024-11-16 18:48:48.647258] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.282 
18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.282 "name": "Existed_Raid", 00:08:05.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.282 "strip_size_kb": 64, 00:08:05.282 "state": "configuring", 00:08:05.282 "raid_level": "raid0", 00:08:05.282 "superblock": false, 00:08:05.282 "num_base_bdevs": 3, 00:08:05.282 "num_base_bdevs_discovered": 1, 00:08:05.282 "num_base_bdevs_operational": 3, 00:08:05.282 "base_bdevs_list": [ 00:08:05.282 { 00:08:05.282 "name": "BaseBdev1", 00:08:05.282 "uuid": "d94ad210-7c0c-4262-a39b-e64883116c09", 00:08:05.282 "is_configured": true, 00:08:05.282 "data_offset": 0, 00:08:05.282 "data_size": 65536 00:08:05.282 }, 00:08:05.282 { 00:08:05.282 "name": null, 
00:08:05.282 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:05.282 "is_configured": false, 00:08:05.282 "data_offset": 0, 00:08:05.282 "data_size": 65536 00:08:05.282 }, 00:08:05.282 { 00:08:05.282 "name": null, 00:08:05.282 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:05.282 "is_configured": false, 00:08:05.282 "data_offset": 0, 00:08:05.282 "data_size": 65536 00:08:05.282 } 00:08:05.282 ] 00:08:05.282 }' 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.282 18:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.852 [2024-11-16 18:48:49.114479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:05.852 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.853 "name": "Existed_Raid", 00:08:05.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.853 "strip_size_kb": 64, 00:08:05.853 "state": "configuring", 00:08:05.853 "raid_level": "raid0", 00:08:05.853 "superblock": false, 00:08:05.853 
"num_base_bdevs": 3, 00:08:05.853 "num_base_bdevs_discovered": 2, 00:08:05.853 "num_base_bdevs_operational": 3, 00:08:05.853 "base_bdevs_list": [ 00:08:05.853 { 00:08:05.853 "name": "BaseBdev1", 00:08:05.853 "uuid": "d94ad210-7c0c-4262-a39b-e64883116c09", 00:08:05.853 "is_configured": true, 00:08:05.853 "data_offset": 0, 00:08:05.853 "data_size": 65536 00:08:05.853 }, 00:08:05.853 { 00:08:05.853 "name": null, 00:08:05.853 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:05.853 "is_configured": false, 00:08:05.853 "data_offset": 0, 00:08:05.853 "data_size": 65536 00:08:05.853 }, 00:08:05.853 { 00:08:05.853 "name": "BaseBdev3", 00:08:05.853 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:05.853 "is_configured": true, 00:08:05.853 "data_offset": 0, 00:08:05.853 "data_size": 65536 00:08:05.853 } 00:08:05.853 ] 00:08:05.853 }' 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.853 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.113 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.113 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:06.113 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.113 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.113 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.113 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:06.113 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.113 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.113 18:48:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.113 [2024-11-16 18:48:49.573768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.373 "name": "Existed_Raid", 00:08:06.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.373 "strip_size_kb": 64, 00:08:06.373 "state": "configuring", 00:08:06.373 "raid_level": "raid0", 00:08:06.373 "superblock": false, 00:08:06.373 "num_base_bdevs": 3, 00:08:06.373 "num_base_bdevs_discovered": 1, 00:08:06.373 "num_base_bdevs_operational": 3, 00:08:06.373 "base_bdevs_list": [ 00:08:06.373 { 00:08:06.373 "name": null, 00:08:06.373 "uuid": "d94ad210-7c0c-4262-a39b-e64883116c09", 00:08:06.373 "is_configured": false, 00:08:06.373 "data_offset": 0, 00:08:06.373 "data_size": 65536 00:08:06.373 }, 00:08:06.373 { 00:08:06.373 "name": null, 00:08:06.373 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:06.373 "is_configured": false, 00:08:06.373 "data_offset": 0, 00:08:06.373 "data_size": 65536 00:08:06.373 }, 00:08:06.373 { 00:08:06.373 "name": "BaseBdev3", 00:08:06.373 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:06.373 "is_configured": true, 00:08:06.373 "data_offset": 0, 00:08:06.373 "data_size": 65536 00:08:06.373 } 00:08:06.373 ] 00:08:06.373 }' 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.373 18:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.942 [2024-11-16 18:48:50.153578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.942 "name": "Existed_Raid", 00:08:06.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.942 "strip_size_kb": 64, 00:08:06.942 "state": "configuring", 00:08:06.942 "raid_level": "raid0", 00:08:06.942 "superblock": false, 00:08:06.942 "num_base_bdevs": 3, 00:08:06.942 "num_base_bdevs_discovered": 2, 00:08:06.942 "num_base_bdevs_operational": 3, 00:08:06.942 "base_bdevs_list": [ 00:08:06.942 { 00:08:06.942 "name": null, 00:08:06.942 "uuid": "d94ad210-7c0c-4262-a39b-e64883116c09", 00:08:06.942 "is_configured": false, 00:08:06.942 "data_offset": 0, 00:08:06.942 "data_size": 65536 00:08:06.942 }, 00:08:06.942 { 00:08:06.942 "name": "BaseBdev2", 00:08:06.942 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:06.942 "is_configured": true, 00:08:06.942 "data_offset": 0, 00:08:06.942 "data_size": 65536 00:08:06.942 }, 00:08:06.942 { 00:08:06.942 "name": "BaseBdev3", 00:08:06.942 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:06.942 "is_configured": true, 00:08:06.942 "data_offset": 0, 00:08:06.942 "data_size": 65536 00:08:06.942 } 00:08:06.942 ] 00:08:06.942 }' 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.942 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:07.210 
18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.210 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d94ad210-7c0c-4262-a39b-e64883116c09 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.485 [2024-11-16 18:48:50.727741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:07.485 [2024-11-16 18:48:50.727776] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:07.485 [2024-11-16 18:48:50.727785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:07.485 [2024-11-16 18:48:50.728047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:07.485 [2024-11-16 18:48:50.728211] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:07.485 [2024-11-16 18:48:50.728220] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:07.485 [2024-11-16 18:48:50.728470] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:07.485 NewBaseBdev 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:07.485 [ 00:08:07.485 { 00:08:07.485 "name": "NewBaseBdev", 00:08:07.485 "aliases": [ 00:08:07.485 "d94ad210-7c0c-4262-a39b-e64883116c09" 00:08:07.485 ], 00:08:07.485 "product_name": "Malloc disk", 00:08:07.485 "block_size": 512, 00:08:07.485 "num_blocks": 65536, 00:08:07.485 "uuid": "d94ad210-7c0c-4262-a39b-e64883116c09", 00:08:07.485 "assigned_rate_limits": { 00:08:07.485 "rw_ios_per_sec": 0, 00:08:07.485 "rw_mbytes_per_sec": 0, 00:08:07.485 "r_mbytes_per_sec": 0, 00:08:07.485 "w_mbytes_per_sec": 0 00:08:07.485 }, 00:08:07.485 "claimed": true, 00:08:07.485 "claim_type": "exclusive_write", 00:08:07.485 "zoned": false, 00:08:07.485 "supported_io_types": { 00:08:07.485 "read": true, 00:08:07.485 "write": true, 00:08:07.485 "unmap": true, 00:08:07.485 "flush": true, 00:08:07.485 "reset": true, 00:08:07.485 "nvme_admin": false, 00:08:07.485 "nvme_io": false, 00:08:07.485 "nvme_io_md": false, 00:08:07.485 "write_zeroes": true, 00:08:07.485 "zcopy": true, 00:08:07.485 "get_zone_info": false, 00:08:07.485 "zone_management": false, 00:08:07.485 "zone_append": false, 00:08:07.485 "compare": false, 00:08:07.485 "compare_and_write": false, 00:08:07.485 "abort": true, 00:08:07.485 "seek_hole": false, 00:08:07.485 "seek_data": false, 00:08:07.485 "copy": true, 00:08:07.485 "nvme_iov_md": false 00:08:07.485 }, 00:08:07.485 "memory_domains": [ 00:08:07.485 { 00:08:07.485 "dma_device_id": "system", 00:08:07.485 "dma_device_type": 1 00:08:07.485 }, 00:08:07.485 { 00:08:07.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:07.485 "dma_device_type": 2 00:08:07.485 } 00:08:07.485 ], 00:08:07.485 "driver_specific": {} 00:08:07.485 } 00:08:07.485 ] 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.485 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.485 "name": "Existed_Raid", 00:08:07.486 "uuid": "28e5f5a4-6405-4a48-86b1-3bbd6659faa7", 00:08:07.486 "strip_size_kb": 64, 00:08:07.486 "state": "online", 00:08:07.486 "raid_level": "raid0", 00:08:07.486 "superblock": false, 00:08:07.486 "num_base_bdevs": 3, 00:08:07.486 
"num_base_bdevs_discovered": 3, 00:08:07.486 "num_base_bdevs_operational": 3, 00:08:07.486 "base_bdevs_list": [ 00:08:07.486 { 00:08:07.486 "name": "NewBaseBdev", 00:08:07.486 "uuid": "d94ad210-7c0c-4262-a39b-e64883116c09", 00:08:07.486 "is_configured": true, 00:08:07.486 "data_offset": 0, 00:08:07.486 "data_size": 65536 00:08:07.486 }, 00:08:07.486 { 00:08:07.486 "name": "BaseBdev2", 00:08:07.486 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:07.486 "is_configured": true, 00:08:07.486 "data_offset": 0, 00:08:07.486 "data_size": 65536 00:08:07.486 }, 00:08:07.486 { 00:08:07.486 "name": "BaseBdev3", 00:08:07.486 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:07.486 "is_configured": true, 00:08:07.486 "data_offset": 0, 00:08:07.486 "data_size": 65536 00:08:07.486 } 00:08:07.486 ] 00:08:07.486 }' 00:08:07.486 18:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.486 18:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:08.056 [2024-11-16 18:48:51.251176] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:08.056 "name": "Existed_Raid", 00:08:08.056 "aliases": [ 00:08:08.056 "28e5f5a4-6405-4a48-86b1-3bbd6659faa7" 00:08:08.056 ], 00:08:08.056 "product_name": "Raid Volume", 00:08:08.056 "block_size": 512, 00:08:08.056 "num_blocks": 196608, 00:08:08.056 "uuid": "28e5f5a4-6405-4a48-86b1-3bbd6659faa7", 00:08:08.056 "assigned_rate_limits": { 00:08:08.056 "rw_ios_per_sec": 0, 00:08:08.056 "rw_mbytes_per_sec": 0, 00:08:08.056 "r_mbytes_per_sec": 0, 00:08:08.056 "w_mbytes_per_sec": 0 00:08:08.056 }, 00:08:08.056 "claimed": false, 00:08:08.056 "zoned": false, 00:08:08.056 "supported_io_types": { 00:08:08.056 "read": true, 00:08:08.056 "write": true, 00:08:08.056 "unmap": true, 00:08:08.056 "flush": true, 00:08:08.056 "reset": true, 00:08:08.056 "nvme_admin": false, 00:08:08.056 "nvme_io": false, 00:08:08.056 "nvme_io_md": false, 00:08:08.056 "write_zeroes": true, 00:08:08.056 "zcopy": false, 00:08:08.056 "get_zone_info": false, 00:08:08.056 "zone_management": false, 00:08:08.056 "zone_append": false, 00:08:08.056 "compare": false, 00:08:08.056 "compare_and_write": false, 00:08:08.056 "abort": false, 00:08:08.056 "seek_hole": false, 00:08:08.056 "seek_data": false, 00:08:08.056 "copy": false, 00:08:08.056 "nvme_iov_md": false 00:08:08.056 }, 00:08:08.056 "memory_domains": [ 00:08:08.056 { 00:08:08.056 "dma_device_id": "system", 00:08:08.056 "dma_device_type": 1 00:08:08.056 }, 00:08:08.056 { 00:08:08.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.056 "dma_device_type": 2 00:08:08.056 }, 00:08:08.056 
{ 00:08:08.056 "dma_device_id": "system", 00:08:08.056 "dma_device_type": 1 00:08:08.056 }, 00:08:08.056 { 00:08:08.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.056 "dma_device_type": 2 00:08:08.056 }, 00:08:08.056 { 00:08:08.056 "dma_device_id": "system", 00:08:08.056 "dma_device_type": 1 00:08:08.056 }, 00:08:08.056 { 00:08:08.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:08.056 "dma_device_type": 2 00:08:08.056 } 00:08:08.056 ], 00:08:08.056 "driver_specific": { 00:08:08.056 "raid": { 00:08:08.056 "uuid": "28e5f5a4-6405-4a48-86b1-3bbd6659faa7", 00:08:08.056 "strip_size_kb": 64, 00:08:08.056 "state": "online", 00:08:08.056 "raid_level": "raid0", 00:08:08.056 "superblock": false, 00:08:08.056 "num_base_bdevs": 3, 00:08:08.056 "num_base_bdevs_discovered": 3, 00:08:08.056 "num_base_bdevs_operational": 3, 00:08:08.056 "base_bdevs_list": [ 00:08:08.056 { 00:08:08.056 "name": "NewBaseBdev", 00:08:08.056 "uuid": "d94ad210-7c0c-4262-a39b-e64883116c09", 00:08:08.056 "is_configured": true, 00:08:08.056 "data_offset": 0, 00:08:08.056 "data_size": 65536 00:08:08.056 }, 00:08:08.056 { 00:08:08.056 "name": "BaseBdev2", 00:08:08.056 "uuid": "1d191588-edee-499e-b59d-b26072e2f30e", 00:08:08.056 "is_configured": true, 00:08:08.056 "data_offset": 0, 00:08:08.056 "data_size": 65536 00:08:08.056 }, 00:08:08.056 { 00:08:08.056 "name": "BaseBdev3", 00:08:08.056 "uuid": "0b9902b7-edc5-4144-bb52-8467f6c4f1ba", 00:08:08.056 "is_configured": true, 00:08:08.056 "data_offset": 0, 00:08:08.056 "data_size": 65536 00:08:08.056 } 00:08:08.056 ] 00:08:08.056 } 00:08:08.056 } 00:08:08.056 }' 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:08.056 BaseBdev2 00:08:08.056 BaseBdev3' 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:08.056 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.057 
18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.057 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.317 [2024-11-16 18:48:51.526417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:08.317 [2024-11-16 18:48:51.526442] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.317 [2024-11-16 18:48:51.526511] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.317 [2024-11-16 18:48:51.526559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.317 [2024-11-16 18:48:51.526571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63664 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63664 ']' 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63664 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63664 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63664' 00:08:08.317 killing process with pid 63664 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63664 00:08:08.317 [2024-11-16 18:48:51.574250] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.317 18:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63664 00:08:08.577 [2024-11-16 18:48:51.861223] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:09.517 00:08:09.517 real 0m10.222s 00:08:09.517 user 0m16.310s 00:08:09.517 sys 0m1.735s 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.517 
************************************ 00:08:09.517 END TEST raid_state_function_test 00:08:09.517 ************************************ 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.517 18:48:52 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:09.517 18:48:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:09.517 18:48:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.517 18:48:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.517 ************************************ 00:08:09.517 START TEST raid_state_function_test_sb 00:08:09.517 ************************************ 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:09.517 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:09.777 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64285 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64285' 00:08:09.778 Process raid pid: 64285 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64285 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64285 ']' 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:09.778 18:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:09.778 [2024-11-16 18:48:53.073976] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:09.778 [2024-11-16 18:48:53.074188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:10.037 [2024-11-16 18:48:53.251492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.037 [2024-11-16 18:48:53.361204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.297 [2024-11-16 18:48:53.560341] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.297 [2024-11-16 18:48:53.560456] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.557 [2024-11-16 18:48:53.912036] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:10.557 [2024-11-16 18:48:53.912130] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:10.557 [2024-11-16 18:48:53.912159] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:10.557 [2024-11-16 18:48:53.912183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:10.557 [2024-11-16 18:48:53.912200] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:10.557 [2024-11-16 18:48:53.912221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.557 "name": "Existed_Raid", 00:08:10.557 "uuid": "a49286a0-f172-4eb8-b7f6-1646f4f9800a", 00:08:10.557 "strip_size_kb": 64, 00:08:10.557 "state": "configuring", 00:08:10.557 "raid_level": "raid0", 00:08:10.557 "superblock": true, 00:08:10.557 "num_base_bdevs": 3, 00:08:10.557 "num_base_bdevs_discovered": 0, 00:08:10.557 "num_base_bdevs_operational": 3, 00:08:10.557 "base_bdevs_list": [ 00:08:10.557 { 00:08:10.557 "name": "BaseBdev1", 00:08:10.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.557 "is_configured": false, 00:08:10.557 "data_offset": 0, 00:08:10.557 "data_size": 0 00:08:10.557 }, 00:08:10.557 { 00:08:10.557 "name": "BaseBdev2", 00:08:10.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.557 "is_configured": false, 00:08:10.557 "data_offset": 0, 00:08:10.557 "data_size": 0 00:08:10.557 }, 00:08:10.557 { 00:08:10.557 "name": "BaseBdev3", 00:08:10.557 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:10.557 "is_configured": false, 00:08:10.557 "data_offset": 0, 00:08:10.557 "data_size": 0 00:08:10.557 } 00:08:10.557 ] 00:08:10.557 }' 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.557 18:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.128 [2024-11-16 18:48:54.331264] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.128 [2024-11-16 18:48:54.331300] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.128 [2024-11-16 18:48:54.343240] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:11.128 [2024-11-16 18:48:54.343284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:11.128 [2024-11-16 18:48:54.343293] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.128 [2024-11-16 18:48:54.343302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.128 [2024-11-16 18:48:54.343307] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.128 [2024-11-16 18:48:54.343316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.128 [2024-11-16 18:48:54.389342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.128 BaseBdev1 
00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.128 [ 00:08:11.128 { 00:08:11.128 "name": "BaseBdev1", 00:08:11.128 "aliases": [ 00:08:11.128 "66bec237-bc11-416e-b937-56b41399a341" 00:08:11.128 ], 00:08:11.128 "product_name": "Malloc disk", 00:08:11.128 "block_size": 512, 00:08:11.128 "num_blocks": 65536, 00:08:11.128 "uuid": "66bec237-bc11-416e-b937-56b41399a341", 00:08:11.128 "assigned_rate_limits": { 00:08:11.128 
"rw_ios_per_sec": 0, 00:08:11.128 "rw_mbytes_per_sec": 0, 00:08:11.128 "r_mbytes_per_sec": 0, 00:08:11.128 "w_mbytes_per_sec": 0 00:08:11.128 }, 00:08:11.128 "claimed": true, 00:08:11.128 "claim_type": "exclusive_write", 00:08:11.128 "zoned": false, 00:08:11.128 "supported_io_types": { 00:08:11.128 "read": true, 00:08:11.128 "write": true, 00:08:11.128 "unmap": true, 00:08:11.128 "flush": true, 00:08:11.128 "reset": true, 00:08:11.128 "nvme_admin": false, 00:08:11.128 "nvme_io": false, 00:08:11.128 "nvme_io_md": false, 00:08:11.128 "write_zeroes": true, 00:08:11.128 "zcopy": true, 00:08:11.128 "get_zone_info": false, 00:08:11.128 "zone_management": false, 00:08:11.128 "zone_append": false, 00:08:11.128 "compare": false, 00:08:11.128 "compare_and_write": false, 00:08:11.128 "abort": true, 00:08:11.128 "seek_hole": false, 00:08:11.128 "seek_data": false, 00:08:11.128 "copy": true, 00:08:11.128 "nvme_iov_md": false 00:08:11.128 }, 00:08:11.128 "memory_domains": [ 00:08:11.128 { 00:08:11.128 "dma_device_id": "system", 00:08:11.128 "dma_device_type": 1 00:08:11.128 }, 00:08:11.128 { 00:08:11.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.128 "dma_device_type": 2 00:08:11.128 } 00:08:11.128 ], 00:08:11.128 "driver_specific": {} 00:08:11.128 } 00:08:11.128 ] 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.128 "name": "Existed_Raid", 00:08:11.128 "uuid": "2428baae-a0c3-44b1-bf27-960312f008d1", 00:08:11.128 "strip_size_kb": 64, 00:08:11.128 "state": "configuring", 00:08:11.128 "raid_level": "raid0", 00:08:11.128 "superblock": true, 00:08:11.128 "num_base_bdevs": 3, 00:08:11.128 "num_base_bdevs_discovered": 1, 00:08:11.128 "num_base_bdevs_operational": 3, 00:08:11.128 "base_bdevs_list": [ 00:08:11.128 { 00:08:11.128 "name": "BaseBdev1", 00:08:11.128 "uuid": "66bec237-bc11-416e-b937-56b41399a341", 00:08:11.128 "is_configured": true, 00:08:11.128 "data_offset": 2048, 00:08:11.128 "data_size": 63488 
00:08:11.128 }, 00:08:11.128 { 00:08:11.128 "name": "BaseBdev2", 00:08:11.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.128 "is_configured": false, 00:08:11.128 "data_offset": 0, 00:08:11.128 "data_size": 0 00:08:11.128 }, 00:08:11.128 { 00:08:11.128 "name": "BaseBdev3", 00:08:11.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.128 "is_configured": false, 00:08:11.128 "data_offset": 0, 00:08:11.128 "data_size": 0 00:08:11.128 } 00:08:11.128 ] 00:08:11.128 }' 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.128 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.698 [2024-11-16 18:48:54.864541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:11.698 [2024-11-16 18:48:54.864633] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.698 [2024-11-16 18:48:54.876572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:11.698 [2024-11-16 
18:48:54.878372] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:11.698 [2024-11-16 18:48:54.878443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:11.698 [2024-11-16 18:48:54.878470] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:11.698 [2024-11-16 18:48:54.878492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.698 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.698 "name": "Existed_Raid", 00:08:11.698 "uuid": "807fe938-24fe-435d-acbf-b0612541ff0d", 00:08:11.698 "strip_size_kb": 64, 00:08:11.698 "state": "configuring", 00:08:11.698 "raid_level": "raid0", 00:08:11.698 "superblock": true, 00:08:11.698 "num_base_bdevs": 3, 00:08:11.698 "num_base_bdevs_discovered": 1, 00:08:11.698 "num_base_bdevs_operational": 3, 00:08:11.698 "base_bdevs_list": [ 00:08:11.698 { 00:08:11.698 "name": "BaseBdev1", 00:08:11.698 "uuid": "66bec237-bc11-416e-b937-56b41399a341", 00:08:11.698 "is_configured": true, 00:08:11.698 "data_offset": 2048, 00:08:11.698 "data_size": 63488 00:08:11.698 }, 00:08:11.698 { 00:08:11.698 "name": "BaseBdev2", 00:08:11.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.698 "is_configured": false, 00:08:11.698 "data_offset": 0, 00:08:11.698 "data_size": 0 00:08:11.698 }, 00:08:11.699 { 00:08:11.699 "name": "BaseBdev3", 00:08:11.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:11.699 "is_configured": false, 00:08:11.699 "data_offset": 0, 00:08:11.699 "data_size": 0 00:08:11.699 } 00:08:11.699 ] 00:08:11.699 }' 00:08:11.699 18:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.699 18:48:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.959 [2024-11-16 18:48:55.385627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.959 BaseBdev2 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:11.959 [ 00:08:11.959 { 00:08:11.959 "name": "BaseBdev2", 00:08:11.959 "aliases": [ 00:08:11.959 "bc943ea7-c697-44ee-8c09-537894495701" 00:08:11.959 ], 00:08:11.959 "product_name": "Malloc disk", 00:08:11.959 "block_size": 512, 00:08:11.959 "num_blocks": 65536, 00:08:11.959 "uuid": "bc943ea7-c697-44ee-8c09-537894495701", 00:08:11.959 "assigned_rate_limits": { 00:08:11.959 "rw_ios_per_sec": 0, 00:08:11.959 "rw_mbytes_per_sec": 0, 00:08:11.959 "r_mbytes_per_sec": 0, 00:08:11.959 "w_mbytes_per_sec": 0 00:08:11.959 }, 00:08:11.959 "claimed": true, 00:08:11.959 "claim_type": "exclusive_write", 00:08:11.959 "zoned": false, 00:08:11.959 "supported_io_types": { 00:08:11.959 "read": true, 00:08:11.959 "write": true, 00:08:11.959 "unmap": true, 00:08:11.959 "flush": true, 00:08:11.959 "reset": true, 00:08:11.959 "nvme_admin": false, 00:08:11.959 "nvme_io": false, 00:08:11.959 "nvme_io_md": false, 00:08:11.959 "write_zeroes": true, 00:08:11.959 "zcopy": true, 00:08:11.959 "get_zone_info": false, 00:08:11.959 "zone_management": false, 00:08:11.959 "zone_append": false, 00:08:11.959 "compare": false, 00:08:11.959 "compare_and_write": false, 00:08:11.959 "abort": true, 00:08:11.959 "seek_hole": false, 00:08:11.959 "seek_data": false, 00:08:11.959 "copy": true, 00:08:11.959 "nvme_iov_md": false 00:08:11.959 }, 00:08:11.959 "memory_domains": [ 00:08:11.959 { 00:08:11.959 "dma_device_id": "system", 00:08:11.959 "dma_device_type": 1 00:08:11.959 }, 00:08:11.959 { 00:08:11.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.959 "dma_device_type": 2 00:08:11.959 } 00:08:11.959 ], 00:08:11.959 "driver_specific": {} 00:08:11.959 } 00:08:11.959 ] 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.959 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.218 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.218 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.218 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.218 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.218 18:48:55 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.218 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.218 "name": "Existed_Raid", 00:08:12.218 "uuid": "807fe938-24fe-435d-acbf-b0612541ff0d", 00:08:12.218 "strip_size_kb": 64, 00:08:12.218 "state": "configuring", 00:08:12.218 "raid_level": "raid0", 00:08:12.218 "superblock": true, 00:08:12.218 "num_base_bdevs": 3, 00:08:12.218 "num_base_bdevs_discovered": 2, 00:08:12.218 "num_base_bdevs_operational": 3, 00:08:12.218 "base_bdevs_list": [ 00:08:12.218 { 00:08:12.218 "name": "BaseBdev1", 00:08:12.218 "uuid": "66bec237-bc11-416e-b937-56b41399a341", 00:08:12.218 "is_configured": true, 00:08:12.218 "data_offset": 2048, 00:08:12.218 "data_size": 63488 00:08:12.218 }, 00:08:12.218 { 00:08:12.218 "name": "BaseBdev2", 00:08:12.218 "uuid": "bc943ea7-c697-44ee-8c09-537894495701", 00:08:12.218 "is_configured": true, 00:08:12.218 "data_offset": 2048, 00:08:12.218 "data_size": 63488 00:08:12.218 }, 00:08:12.218 { 00:08:12.218 "name": "BaseBdev3", 00:08:12.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:12.218 "is_configured": false, 00:08:12.218 "data_offset": 0, 00:08:12.218 "data_size": 0 00:08:12.218 } 00:08:12.218 ] 00:08:12.218 }' 00:08:12.218 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.218 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.478 [2024-11-16 18:48:55.857253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:12.478 [2024-11-16 18:48:55.857518] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:12.478 [2024-11-16 18:48:55.857541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:12.478 [2024-11-16 18:48:55.857822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:12.478 BaseBdev3 00:08:12.478 [2024-11-16 18:48:55.857963] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:12.478 [2024-11-16 18:48:55.857975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:12.478 [2024-11-16 18:48:55.858109] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.478 [ 00:08:12.478 { 00:08:12.478 "name": "BaseBdev3", 00:08:12.478 "aliases": [ 00:08:12.478 "42088fcc-a9e7-40a5-9cc3-3c1964fd2da5" 00:08:12.478 ], 00:08:12.478 "product_name": "Malloc disk", 00:08:12.478 "block_size": 512, 00:08:12.478 "num_blocks": 65536, 00:08:12.478 "uuid": "42088fcc-a9e7-40a5-9cc3-3c1964fd2da5", 00:08:12.478 "assigned_rate_limits": { 00:08:12.478 "rw_ios_per_sec": 0, 00:08:12.478 "rw_mbytes_per_sec": 0, 00:08:12.478 "r_mbytes_per_sec": 0, 00:08:12.478 "w_mbytes_per_sec": 0 00:08:12.478 }, 00:08:12.478 "claimed": true, 00:08:12.478 "claim_type": "exclusive_write", 00:08:12.478 "zoned": false, 00:08:12.478 "supported_io_types": { 00:08:12.478 "read": true, 00:08:12.478 "write": true, 00:08:12.478 "unmap": true, 00:08:12.478 "flush": true, 00:08:12.478 "reset": true, 00:08:12.478 "nvme_admin": false, 00:08:12.478 "nvme_io": false, 00:08:12.478 "nvme_io_md": false, 00:08:12.478 "write_zeroes": true, 00:08:12.478 "zcopy": true, 00:08:12.478 "get_zone_info": false, 00:08:12.478 "zone_management": false, 00:08:12.478 "zone_append": false, 00:08:12.478 "compare": false, 00:08:12.478 "compare_and_write": false, 00:08:12.478 "abort": true, 00:08:12.478 "seek_hole": false, 00:08:12.478 "seek_data": false, 00:08:12.478 "copy": true, 00:08:12.478 "nvme_iov_md": false 00:08:12.478 }, 00:08:12.478 "memory_domains": [ 00:08:12.478 { 00:08:12.478 "dma_device_id": "system", 00:08:12.478 "dma_device_type": 1 00:08:12.478 }, 00:08:12.478 { 00:08:12.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.478 "dma_device_type": 2 00:08:12.478 } 00:08:12.478 ], 00:08:12.478 "driver_specific": 
{} 00:08:12.478 } 00:08:12.478 ] 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.478 "name": "Existed_Raid", 00:08:12.478 "uuid": "807fe938-24fe-435d-acbf-b0612541ff0d", 00:08:12.478 "strip_size_kb": 64, 00:08:12.478 "state": "online", 00:08:12.478 "raid_level": "raid0", 00:08:12.478 "superblock": true, 00:08:12.478 "num_base_bdevs": 3, 00:08:12.478 "num_base_bdevs_discovered": 3, 00:08:12.478 "num_base_bdevs_operational": 3, 00:08:12.478 "base_bdevs_list": [ 00:08:12.478 { 00:08:12.478 "name": "BaseBdev1", 00:08:12.478 "uuid": "66bec237-bc11-416e-b937-56b41399a341", 00:08:12.478 "is_configured": true, 00:08:12.478 "data_offset": 2048, 00:08:12.478 "data_size": 63488 00:08:12.478 }, 00:08:12.478 { 00:08:12.478 "name": "BaseBdev2", 00:08:12.478 "uuid": "bc943ea7-c697-44ee-8c09-537894495701", 00:08:12.478 "is_configured": true, 00:08:12.478 "data_offset": 2048, 00:08:12.478 "data_size": 63488 00:08:12.478 }, 00:08:12.478 { 00:08:12.478 "name": "BaseBdev3", 00:08:12.478 "uuid": "42088fcc-a9e7-40a5-9cc3-3c1964fd2da5", 00:08:12.478 "is_configured": true, 00:08:12.478 "data_offset": 2048, 00:08:12.478 "data_size": 63488 00:08:12.478 } 00:08:12.478 ] 00:08:12.478 }' 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.478 18:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.048 [2024-11-16 18:48:56.324822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.048 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:13.048 "name": "Existed_Raid", 00:08:13.048 "aliases": [ 00:08:13.048 "807fe938-24fe-435d-acbf-b0612541ff0d" 00:08:13.048 ], 00:08:13.048 "product_name": "Raid Volume", 00:08:13.048 "block_size": 512, 00:08:13.048 "num_blocks": 190464, 00:08:13.048 "uuid": "807fe938-24fe-435d-acbf-b0612541ff0d", 00:08:13.048 "assigned_rate_limits": { 00:08:13.048 "rw_ios_per_sec": 0, 00:08:13.048 "rw_mbytes_per_sec": 0, 00:08:13.048 "r_mbytes_per_sec": 0, 00:08:13.048 "w_mbytes_per_sec": 0 00:08:13.048 }, 00:08:13.048 "claimed": false, 00:08:13.048 "zoned": false, 00:08:13.048 "supported_io_types": { 00:08:13.048 "read": true, 00:08:13.048 "write": true, 00:08:13.048 "unmap": true, 00:08:13.048 "flush": true, 00:08:13.048 "reset": true, 00:08:13.048 "nvme_admin": false, 00:08:13.048 "nvme_io": false, 00:08:13.048 "nvme_io_md": false, 00:08:13.048 
"write_zeroes": true, 00:08:13.048 "zcopy": false, 00:08:13.048 "get_zone_info": false, 00:08:13.048 "zone_management": false, 00:08:13.048 "zone_append": false, 00:08:13.048 "compare": false, 00:08:13.048 "compare_and_write": false, 00:08:13.048 "abort": false, 00:08:13.048 "seek_hole": false, 00:08:13.048 "seek_data": false, 00:08:13.048 "copy": false, 00:08:13.048 "nvme_iov_md": false 00:08:13.048 }, 00:08:13.048 "memory_domains": [ 00:08:13.048 { 00:08:13.048 "dma_device_id": "system", 00:08:13.048 "dma_device_type": 1 00:08:13.048 }, 00:08:13.048 { 00:08:13.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.048 "dma_device_type": 2 00:08:13.048 }, 00:08:13.048 { 00:08:13.048 "dma_device_id": "system", 00:08:13.048 "dma_device_type": 1 00:08:13.048 }, 00:08:13.048 { 00:08:13.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.048 "dma_device_type": 2 00:08:13.049 }, 00:08:13.049 { 00:08:13.049 "dma_device_id": "system", 00:08:13.049 "dma_device_type": 1 00:08:13.049 }, 00:08:13.049 { 00:08:13.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:13.049 "dma_device_type": 2 00:08:13.049 } 00:08:13.049 ], 00:08:13.049 "driver_specific": { 00:08:13.049 "raid": { 00:08:13.049 "uuid": "807fe938-24fe-435d-acbf-b0612541ff0d", 00:08:13.049 "strip_size_kb": 64, 00:08:13.049 "state": "online", 00:08:13.049 "raid_level": "raid0", 00:08:13.049 "superblock": true, 00:08:13.049 "num_base_bdevs": 3, 00:08:13.049 "num_base_bdevs_discovered": 3, 00:08:13.049 "num_base_bdevs_operational": 3, 00:08:13.049 "base_bdevs_list": [ 00:08:13.049 { 00:08:13.049 "name": "BaseBdev1", 00:08:13.049 "uuid": "66bec237-bc11-416e-b937-56b41399a341", 00:08:13.049 "is_configured": true, 00:08:13.049 "data_offset": 2048, 00:08:13.049 "data_size": 63488 00:08:13.049 }, 00:08:13.049 { 00:08:13.049 "name": "BaseBdev2", 00:08:13.049 "uuid": "bc943ea7-c697-44ee-8c09-537894495701", 00:08:13.049 "is_configured": true, 00:08:13.049 "data_offset": 2048, 00:08:13.049 "data_size": 63488 00:08:13.049 }, 
00:08:13.049 { 00:08:13.049 "name": "BaseBdev3", 00:08:13.049 "uuid": "42088fcc-a9e7-40a5-9cc3-3c1964fd2da5", 00:08:13.049 "is_configured": true, 00:08:13.049 "data_offset": 2048, 00:08:13.049 "data_size": 63488 00:08:13.049 } 00:08:13.049 ] 00:08:13.049 } 00:08:13.049 } 00:08:13.049 }' 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:13.049 BaseBdev2 00:08:13.049 BaseBdev3' 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.049 
18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.049 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.309 [2024-11-16 18:48:56.588079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:13.309 [2024-11-16 18:48:56.588110] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.309 [2024-11-16 18:48:56.588161] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.309 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.310 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.310 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.310 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.310 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.310 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.310 "name": "Existed_Raid", 00:08:13.310 "uuid": "807fe938-24fe-435d-acbf-b0612541ff0d", 00:08:13.310 "strip_size_kb": 64, 00:08:13.310 "state": "offline", 00:08:13.310 "raid_level": "raid0", 00:08:13.310 "superblock": true, 00:08:13.310 "num_base_bdevs": 3, 00:08:13.310 "num_base_bdevs_discovered": 2, 00:08:13.310 "num_base_bdevs_operational": 2, 00:08:13.310 "base_bdevs_list": [ 00:08:13.310 { 00:08:13.310 "name": null, 00:08:13.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:13.310 "is_configured": false, 00:08:13.310 "data_offset": 0, 00:08:13.310 "data_size": 63488 00:08:13.310 }, 00:08:13.310 { 00:08:13.310 "name": "BaseBdev2", 00:08:13.310 "uuid": "bc943ea7-c697-44ee-8c09-537894495701", 00:08:13.310 "is_configured": true, 00:08:13.310 "data_offset": 2048, 00:08:13.310 "data_size": 63488 00:08:13.310 }, 00:08:13.310 { 00:08:13.310 "name": "BaseBdev3", 00:08:13.310 "uuid": "42088fcc-a9e7-40a5-9cc3-3c1964fd2da5", 
00:08:13.310 "is_configured": true, 00:08:13.310 "data_offset": 2048, 00:08:13.310 "data_size": 63488 00:08:13.310 } 00:08:13.310 ] 00:08:13.310 }' 00:08:13.310 18:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.310 18:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.880 [2024-11-16 18:48:57.130196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.880 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:13.880 [2024-11-16 18:48:57.265801] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:13.880 [2024-11-16 18:48:57.265855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] 
| select(.)' 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.139 BaseBdev2 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:14.139 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.140 [ 00:08:14.140 { 00:08:14.140 "name": "BaseBdev2", 00:08:14.140 "aliases": [ 00:08:14.140 "0cc6a9f6-0815-4da8-8cdf-b8709fa65236" 00:08:14.140 ], 00:08:14.140 "product_name": "Malloc disk", 00:08:14.140 "block_size": 512, 00:08:14.140 "num_blocks": 65536, 00:08:14.140 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:14.140 "assigned_rate_limits": { 00:08:14.140 "rw_ios_per_sec": 0, 00:08:14.140 "rw_mbytes_per_sec": 0, 00:08:14.140 "r_mbytes_per_sec": 0, 00:08:14.140 "w_mbytes_per_sec": 0 00:08:14.140 }, 00:08:14.140 "claimed": false, 00:08:14.140 "zoned": false, 00:08:14.140 "supported_io_types": { 00:08:14.140 "read": true, 00:08:14.140 "write": true, 00:08:14.140 "unmap": true, 00:08:14.140 "flush": true, 00:08:14.140 "reset": true, 00:08:14.140 "nvme_admin": false, 00:08:14.140 "nvme_io": false, 00:08:14.140 "nvme_io_md": false, 00:08:14.140 "write_zeroes": true, 00:08:14.140 "zcopy": true, 00:08:14.140 "get_zone_info": false, 00:08:14.140 "zone_management": false, 00:08:14.140 
"zone_append": false, 00:08:14.140 "compare": false, 00:08:14.140 "compare_and_write": false, 00:08:14.140 "abort": true, 00:08:14.140 "seek_hole": false, 00:08:14.140 "seek_data": false, 00:08:14.140 "copy": true, 00:08:14.140 "nvme_iov_md": false 00:08:14.140 }, 00:08:14.140 "memory_domains": [ 00:08:14.140 { 00:08:14.140 "dma_device_id": "system", 00:08:14.140 "dma_device_type": 1 00:08:14.140 }, 00:08:14.140 { 00:08:14.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.140 "dma_device_type": 2 00:08:14.140 } 00:08:14.140 ], 00:08:14.140 "driver_specific": {} 00:08:14.140 } 00:08:14.140 ] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.140 BaseBdev3 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:14.140 
18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.140 [ 00:08:14.140 { 00:08:14.140 "name": "BaseBdev3", 00:08:14.140 "aliases": [ 00:08:14.140 "622d237a-a885-4aeb-87dc-37c9b663a8fe" 00:08:14.140 ], 00:08:14.140 "product_name": "Malloc disk", 00:08:14.140 "block_size": 512, 00:08:14.140 "num_blocks": 65536, 00:08:14.140 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:14.140 "assigned_rate_limits": { 00:08:14.140 "rw_ios_per_sec": 0, 00:08:14.140 "rw_mbytes_per_sec": 0, 00:08:14.140 "r_mbytes_per_sec": 0, 00:08:14.140 "w_mbytes_per_sec": 0 00:08:14.140 }, 00:08:14.140 "claimed": false, 00:08:14.140 "zoned": false, 00:08:14.140 "supported_io_types": { 00:08:14.140 "read": true, 00:08:14.140 "write": true, 00:08:14.140 "unmap": true, 00:08:14.140 "flush": true, 00:08:14.140 "reset": true, 00:08:14.140 "nvme_admin": false, 00:08:14.140 "nvme_io": false, 00:08:14.140 "nvme_io_md": false, 00:08:14.140 "write_zeroes": true, 00:08:14.140 "zcopy": true, 00:08:14.140 "get_zone_info": false, 
00:08:14.140 "zone_management": false, 00:08:14.140 "zone_append": false, 00:08:14.140 "compare": false, 00:08:14.140 "compare_and_write": false, 00:08:14.140 "abort": true, 00:08:14.140 "seek_hole": false, 00:08:14.140 "seek_data": false, 00:08:14.140 "copy": true, 00:08:14.140 "nvme_iov_md": false 00:08:14.140 }, 00:08:14.140 "memory_domains": [ 00:08:14.140 { 00:08:14.140 "dma_device_id": "system", 00:08:14.140 "dma_device_type": 1 00:08:14.140 }, 00:08:14.140 { 00:08:14.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:14.140 "dma_device_type": 2 00:08:14.140 } 00:08:14.140 ], 00:08:14.140 "driver_specific": {} 00:08:14.140 } 00:08:14.140 ] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.140 [2024-11-16 18:48:57.566780] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:14.140 [2024-11-16 18:48:57.566819] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:14.140 [2024-11-16 18:48:57.566839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:14.140 [2024-11-16 18:48:57.568532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 
is claimed 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.140 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.400 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:08:14.400 "name": "Existed_Raid", 00:08:14.400 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:14.400 "strip_size_kb": 64, 00:08:14.400 "state": "configuring", 00:08:14.400 "raid_level": "raid0", 00:08:14.400 "superblock": true, 00:08:14.400 "num_base_bdevs": 3, 00:08:14.400 "num_base_bdevs_discovered": 2, 00:08:14.400 "num_base_bdevs_operational": 3, 00:08:14.400 "base_bdevs_list": [ 00:08:14.400 { 00:08:14.400 "name": "BaseBdev1", 00:08:14.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.400 "is_configured": false, 00:08:14.400 "data_offset": 0, 00:08:14.400 "data_size": 0 00:08:14.400 }, 00:08:14.400 { 00:08:14.400 "name": "BaseBdev2", 00:08:14.400 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:14.400 "is_configured": true, 00:08:14.400 "data_offset": 2048, 00:08:14.400 "data_size": 63488 00:08:14.400 }, 00:08:14.400 { 00:08:14.400 "name": "BaseBdev3", 00:08:14.400 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:14.400 "is_configured": true, 00:08:14.400 "data_offset": 2048, 00:08:14.400 "data_size": 63488 00:08:14.400 } 00:08:14.400 ] 00:08:14.400 }' 00:08:14.400 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.400 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.660 [2024-11-16 18:48:57.990069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:14.660 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.661 18:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:14.661 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.661 18:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.661 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.661 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:14.661 "name": "Existed_Raid", 00:08:14.661 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:14.661 "strip_size_kb": 64, 00:08:14.661 "state": "configuring", 00:08:14.661 "raid_level": "raid0", 
00:08:14.661 "superblock": true, 00:08:14.661 "num_base_bdevs": 3, 00:08:14.661 "num_base_bdevs_discovered": 1, 00:08:14.661 "num_base_bdevs_operational": 3, 00:08:14.661 "base_bdevs_list": [ 00:08:14.661 { 00:08:14.661 "name": "BaseBdev1", 00:08:14.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.661 "is_configured": false, 00:08:14.661 "data_offset": 0, 00:08:14.661 "data_size": 0 00:08:14.661 }, 00:08:14.661 { 00:08:14.661 "name": null, 00:08:14.661 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:14.661 "is_configured": false, 00:08:14.661 "data_offset": 0, 00:08:14.661 "data_size": 63488 00:08:14.661 }, 00:08:14.661 { 00:08:14.661 "name": "BaseBdev3", 00:08:14.661 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:14.661 "is_configured": true, 00:08:14.661 "data_offset": 2048, 00:08:14.661 "data_size": 63488 00:08:14.661 } 00:08:14.661 ] 00:08:14.661 }' 00:08:14.661 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:14.661 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:14.921 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:14.921 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:14.921 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:14.921 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.183 [2024-11-16 18:48:58.448244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.183 BaseBdev1 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.183 [ 00:08:15.183 { 00:08:15.183 "name": "BaseBdev1", 00:08:15.183 
"aliases": [ 00:08:15.183 "c2669a3c-5433-4735-892a-a1596d1001b1" 00:08:15.183 ], 00:08:15.183 "product_name": "Malloc disk", 00:08:15.183 "block_size": 512, 00:08:15.183 "num_blocks": 65536, 00:08:15.183 "uuid": "c2669a3c-5433-4735-892a-a1596d1001b1", 00:08:15.183 "assigned_rate_limits": { 00:08:15.183 "rw_ios_per_sec": 0, 00:08:15.183 "rw_mbytes_per_sec": 0, 00:08:15.183 "r_mbytes_per_sec": 0, 00:08:15.183 "w_mbytes_per_sec": 0 00:08:15.183 }, 00:08:15.183 "claimed": true, 00:08:15.183 "claim_type": "exclusive_write", 00:08:15.183 "zoned": false, 00:08:15.183 "supported_io_types": { 00:08:15.183 "read": true, 00:08:15.183 "write": true, 00:08:15.183 "unmap": true, 00:08:15.183 "flush": true, 00:08:15.183 "reset": true, 00:08:15.183 "nvme_admin": false, 00:08:15.183 "nvme_io": false, 00:08:15.183 "nvme_io_md": false, 00:08:15.183 "write_zeroes": true, 00:08:15.183 "zcopy": true, 00:08:15.183 "get_zone_info": false, 00:08:15.183 "zone_management": false, 00:08:15.183 "zone_append": false, 00:08:15.183 "compare": false, 00:08:15.183 "compare_and_write": false, 00:08:15.183 "abort": true, 00:08:15.183 "seek_hole": false, 00:08:15.183 "seek_data": false, 00:08:15.183 "copy": true, 00:08:15.183 "nvme_iov_md": false 00:08:15.183 }, 00:08:15.183 "memory_domains": [ 00:08:15.183 { 00:08:15.183 "dma_device_id": "system", 00:08:15.183 "dma_device_type": 1 00:08:15.183 }, 00:08:15.183 { 00:08:15.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.183 "dma_device_type": 2 00:08:15.183 } 00:08:15.183 ], 00:08:15.183 "driver_specific": {} 00:08:15.183 } 00:08:15.183 ] 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.183 18:48:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.183 "name": "Existed_Raid", 00:08:15.183 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:15.183 "strip_size_kb": 64, 00:08:15.183 "state": "configuring", 00:08:15.183 "raid_level": "raid0", 00:08:15.183 "superblock": true, 00:08:15.183 "num_base_bdevs": 3, 00:08:15.183 
"num_base_bdevs_discovered": 2, 00:08:15.183 "num_base_bdevs_operational": 3, 00:08:15.183 "base_bdevs_list": [ 00:08:15.183 { 00:08:15.183 "name": "BaseBdev1", 00:08:15.183 "uuid": "c2669a3c-5433-4735-892a-a1596d1001b1", 00:08:15.183 "is_configured": true, 00:08:15.183 "data_offset": 2048, 00:08:15.183 "data_size": 63488 00:08:15.183 }, 00:08:15.183 { 00:08:15.183 "name": null, 00:08:15.183 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:15.183 "is_configured": false, 00:08:15.183 "data_offset": 0, 00:08:15.183 "data_size": 63488 00:08:15.183 }, 00:08:15.183 { 00:08:15.183 "name": "BaseBdev3", 00:08:15.183 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:15.183 "is_configured": true, 00:08:15.183 "data_offset": 2048, 00:08:15.183 "data_size": 63488 00:08:15.183 } 00:08:15.183 ] 00:08:15.183 }' 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.183 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.753 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.754 18:48:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.754 [2024-11-16 18:48:58.979365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:15.754 18:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.754 18:48:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.754 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.754 "name": "Existed_Raid", 00:08:15.754 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:15.754 "strip_size_kb": 64, 00:08:15.754 "state": "configuring", 00:08:15.754 "raid_level": "raid0", 00:08:15.754 "superblock": true, 00:08:15.754 "num_base_bdevs": 3, 00:08:15.754 "num_base_bdevs_discovered": 1, 00:08:15.754 "num_base_bdevs_operational": 3, 00:08:15.754 "base_bdevs_list": [ 00:08:15.754 { 00:08:15.754 "name": "BaseBdev1", 00:08:15.754 "uuid": "c2669a3c-5433-4735-892a-a1596d1001b1", 00:08:15.754 "is_configured": true, 00:08:15.754 "data_offset": 2048, 00:08:15.754 "data_size": 63488 00:08:15.754 }, 00:08:15.754 { 00:08:15.754 "name": null, 00:08:15.754 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:15.754 "is_configured": false, 00:08:15.754 "data_offset": 0, 00:08:15.754 "data_size": 63488 00:08:15.754 }, 00:08:15.754 { 00:08:15.754 "name": null, 00:08:15.754 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:15.754 "is_configured": false, 00:08:15.754 "data_offset": 0, 00:08:15.754 "data_size": 63488 00:08:15.754 } 00:08:15.754 ] 00:08:15.754 }' 00:08:15.754 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.754 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:16.013 18:48:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.013 [2024-11-16 18:48:59.442612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.013 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.014 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.274 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.274 "name": "Existed_Raid", 00:08:16.274 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:16.274 "strip_size_kb": 64, 00:08:16.274 "state": "configuring", 00:08:16.274 "raid_level": "raid0", 00:08:16.274 "superblock": true, 00:08:16.274 "num_base_bdevs": 3, 00:08:16.274 "num_base_bdevs_discovered": 2, 00:08:16.274 "num_base_bdevs_operational": 3, 00:08:16.274 "base_bdevs_list": [ 00:08:16.274 { 00:08:16.274 "name": "BaseBdev1", 00:08:16.274 "uuid": "c2669a3c-5433-4735-892a-a1596d1001b1", 00:08:16.274 "is_configured": true, 00:08:16.274 "data_offset": 2048, 00:08:16.274 "data_size": 63488 00:08:16.274 }, 00:08:16.274 { 00:08:16.274 "name": null, 00:08:16.274 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:16.274 "is_configured": false, 00:08:16.274 "data_offset": 0, 00:08:16.274 "data_size": 63488 00:08:16.274 }, 00:08:16.274 { 00:08:16.274 "name": "BaseBdev3", 00:08:16.274 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:16.274 "is_configured": true, 00:08:16.274 "data_offset": 2048, 00:08:16.274 "data_size": 63488 00:08:16.274 } 00:08:16.274 ] 00:08:16.274 }' 00:08:16.274 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.274 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.534 [2024-11-16 18:48:59.905865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.534 18:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.534 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.534 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.534 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:16.534 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:16.794 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:16.794 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.794 "name": "Existed_Raid", 00:08:16.794 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:16.794 "strip_size_kb": 64, 00:08:16.794 "state": "configuring", 00:08:16.794 "raid_level": "raid0", 00:08:16.794 "superblock": true, 00:08:16.794 "num_base_bdevs": 3, 00:08:16.794 "num_base_bdevs_discovered": 1, 00:08:16.794 "num_base_bdevs_operational": 3, 00:08:16.794 "base_bdevs_list": [ 00:08:16.794 { 00:08:16.794 "name": null, 00:08:16.794 "uuid": "c2669a3c-5433-4735-892a-a1596d1001b1", 00:08:16.794 "is_configured": false, 00:08:16.794 "data_offset": 0, 00:08:16.794 "data_size": 63488 00:08:16.794 }, 00:08:16.794 { 00:08:16.794 "name": null, 00:08:16.794 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:16.794 "is_configured": false, 00:08:16.794 "data_offset": 0, 00:08:16.794 "data_size": 63488 00:08:16.794 
}, 00:08:16.794 { 00:08:16.794 "name": "BaseBdev3", 00:08:16.794 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:16.794 "is_configured": true, 00:08:16.794 "data_offset": 2048, 00:08:16.794 "data_size": 63488 00:08:16.794 } 00:08:16.794 ] 00:08:16.794 }' 00:08:16.794 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.794 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.054 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:17.054 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.054 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.054 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.054 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.054 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.055 [2024-11-16 18:49:00.501599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.055 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.315 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.315 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.315 "name": "Existed_Raid", 00:08:17.315 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:17.315 "strip_size_kb": 64, 00:08:17.315 "state": "configuring", 00:08:17.315 "raid_level": "raid0", 00:08:17.315 "superblock": true, 00:08:17.315 "num_base_bdevs": 3, 00:08:17.315 "num_base_bdevs_discovered": 2, 00:08:17.315 
"num_base_bdevs_operational": 3, 00:08:17.315 "base_bdevs_list": [ 00:08:17.315 { 00:08:17.315 "name": null, 00:08:17.315 "uuid": "c2669a3c-5433-4735-892a-a1596d1001b1", 00:08:17.315 "is_configured": false, 00:08:17.315 "data_offset": 0, 00:08:17.315 "data_size": 63488 00:08:17.315 }, 00:08:17.315 { 00:08:17.315 "name": "BaseBdev2", 00:08:17.315 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:17.315 "is_configured": true, 00:08:17.315 "data_offset": 2048, 00:08:17.315 "data_size": 63488 00:08:17.315 }, 00:08:17.315 { 00:08:17.315 "name": "BaseBdev3", 00:08:17.315 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:17.315 "is_configured": true, 00:08:17.315 "data_offset": 2048, 00:08:17.315 "data_size": 63488 00:08:17.315 } 00:08:17.315 ] 00:08:17.315 }' 00:08:17.315 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.315 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c2669a3c-5433-4735-892a-a1596d1001b1 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.575 18:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.575 [2024-11-16 18:49:01.004264] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:17.575 [2024-11-16 18:49:01.004482] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:17.575 [2024-11-16 18:49:01.004498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:17.575 [2024-11-16 18:49:01.004767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:17.575 [2024-11-16 18:49:01.004916] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:17.575 [2024-11-16 18:49:01.004930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:17.575 [2024-11-16 18:49:01.005068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.575 NewBaseBdev 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:17.575 18:49:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.575 [ 00:08:17.575 { 00:08:17.575 "name": "NewBaseBdev", 00:08:17.575 "aliases": [ 00:08:17.575 "c2669a3c-5433-4735-892a-a1596d1001b1" 00:08:17.575 ], 00:08:17.575 "product_name": "Malloc disk", 00:08:17.575 "block_size": 512, 00:08:17.575 "num_blocks": 65536, 00:08:17.575 "uuid": "c2669a3c-5433-4735-892a-a1596d1001b1", 00:08:17.575 "assigned_rate_limits": { 00:08:17.575 "rw_ios_per_sec": 0, 00:08:17.575 "rw_mbytes_per_sec": 0, 00:08:17.575 "r_mbytes_per_sec": 0, 00:08:17.575 "w_mbytes_per_sec": 0 00:08:17.575 }, 00:08:17.575 "claimed": true, 00:08:17.575 "claim_type": "exclusive_write", 00:08:17.575 "zoned": false, 00:08:17.575 "supported_io_types": { 00:08:17.575 "read": true, 00:08:17.575 "write": true, 00:08:17.575 "unmap": true, 
00:08:17.575 "flush": true, 00:08:17.575 "reset": true, 00:08:17.575 "nvme_admin": false, 00:08:17.575 "nvme_io": false, 00:08:17.575 "nvme_io_md": false, 00:08:17.575 "write_zeroes": true, 00:08:17.575 "zcopy": true, 00:08:17.575 "get_zone_info": false, 00:08:17.575 "zone_management": false, 00:08:17.575 "zone_append": false, 00:08:17.575 "compare": false, 00:08:17.575 "compare_and_write": false, 00:08:17.575 "abort": true, 00:08:17.575 "seek_hole": false, 00:08:17.575 "seek_data": false, 00:08:17.575 "copy": true, 00:08:17.575 "nvme_iov_md": false 00:08:17.575 }, 00:08:17.575 "memory_domains": [ 00:08:17.575 { 00:08:17.575 "dma_device_id": "system", 00:08:17.575 "dma_device_type": 1 00:08:17.575 }, 00:08:17.575 { 00:08:17.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.575 "dma_device_type": 2 00:08:17.575 } 00:08:17.575 ], 00:08:17.575 "driver_specific": {} 00:08:17.575 } 00:08:17.575 ] 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:17.575 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:17.576 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.576 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:17.836 18:49:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.836 "name": "Existed_Raid", 00:08:17.836 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:17.836 "strip_size_kb": 64, 00:08:17.836 "state": "online", 00:08:17.836 "raid_level": "raid0", 00:08:17.836 "superblock": true, 00:08:17.836 "num_base_bdevs": 3, 00:08:17.836 "num_base_bdevs_discovered": 3, 00:08:17.836 "num_base_bdevs_operational": 3, 00:08:17.836 "base_bdevs_list": [ 00:08:17.836 { 00:08:17.836 "name": "NewBaseBdev", 00:08:17.836 "uuid": "c2669a3c-5433-4735-892a-a1596d1001b1", 00:08:17.836 "is_configured": true, 00:08:17.836 "data_offset": 2048, 00:08:17.836 "data_size": 63488 00:08:17.836 }, 00:08:17.836 { 00:08:17.836 "name": "BaseBdev2", 00:08:17.836 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:17.836 "is_configured": true, 00:08:17.836 "data_offset": 2048, 00:08:17.836 "data_size": 63488 00:08:17.836 }, 00:08:17.836 { 00:08:17.836 "name": "BaseBdev3", 00:08:17.836 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:17.836 "is_configured": 
true, 00:08:17.836 "data_offset": 2048, 00:08:17.836 "data_size": 63488 00:08:17.836 } 00:08:17.836 ] 00:08:17.836 }' 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:17.836 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.096 [2024-11-16 18:49:01.475806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.096 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:18.096 "name": "Existed_Raid", 00:08:18.096 "aliases": [ 00:08:18.096 "e26242d6-500a-45ac-be78-83876c80f31e" 00:08:18.096 ], 00:08:18.096 "product_name": "Raid Volume", 
00:08:18.096 "block_size": 512, 00:08:18.096 "num_blocks": 190464, 00:08:18.096 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:18.096 "assigned_rate_limits": { 00:08:18.096 "rw_ios_per_sec": 0, 00:08:18.096 "rw_mbytes_per_sec": 0, 00:08:18.096 "r_mbytes_per_sec": 0, 00:08:18.097 "w_mbytes_per_sec": 0 00:08:18.097 }, 00:08:18.097 "claimed": false, 00:08:18.097 "zoned": false, 00:08:18.097 "supported_io_types": { 00:08:18.097 "read": true, 00:08:18.097 "write": true, 00:08:18.097 "unmap": true, 00:08:18.097 "flush": true, 00:08:18.097 "reset": true, 00:08:18.097 "nvme_admin": false, 00:08:18.097 "nvme_io": false, 00:08:18.097 "nvme_io_md": false, 00:08:18.097 "write_zeroes": true, 00:08:18.097 "zcopy": false, 00:08:18.097 "get_zone_info": false, 00:08:18.097 "zone_management": false, 00:08:18.097 "zone_append": false, 00:08:18.097 "compare": false, 00:08:18.097 "compare_and_write": false, 00:08:18.097 "abort": false, 00:08:18.097 "seek_hole": false, 00:08:18.097 "seek_data": false, 00:08:18.097 "copy": false, 00:08:18.097 "nvme_iov_md": false 00:08:18.097 }, 00:08:18.097 "memory_domains": [ 00:08:18.097 { 00:08:18.097 "dma_device_id": "system", 00:08:18.097 "dma_device_type": 1 00:08:18.097 }, 00:08:18.097 { 00:08:18.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.097 "dma_device_type": 2 00:08:18.097 }, 00:08:18.097 { 00:08:18.097 "dma_device_id": "system", 00:08:18.097 "dma_device_type": 1 00:08:18.097 }, 00:08:18.097 { 00:08:18.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.097 "dma_device_type": 2 00:08:18.097 }, 00:08:18.097 { 00:08:18.097 "dma_device_id": "system", 00:08:18.097 "dma_device_type": 1 00:08:18.097 }, 00:08:18.097 { 00:08:18.097 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.097 "dma_device_type": 2 00:08:18.097 } 00:08:18.097 ], 00:08:18.097 "driver_specific": { 00:08:18.097 "raid": { 00:08:18.097 "uuid": "e26242d6-500a-45ac-be78-83876c80f31e", 00:08:18.097 "strip_size_kb": 64, 00:08:18.097 "state": "online", 00:08:18.097 
"raid_level": "raid0", 00:08:18.097 "superblock": true, 00:08:18.097 "num_base_bdevs": 3, 00:08:18.097 "num_base_bdevs_discovered": 3, 00:08:18.097 "num_base_bdevs_operational": 3, 00:08:18.097 "base_bdevs_list": [ 00:08:18.097 { 00:08:18.097 "name": "NewBaseBdev", 00:08:18.097 "uuid": "c2669a3c-5433-4735-892a-a1596d1001b1", 00:08:18.097 "is_configured": true, 00:08:18.097 "data_offset": 2048, 00:08:18.097 "data_size": 63488 00:08:18.097 }, 00:08:18.097 { 00:08:18.097 "name": "BaseBdev2", 00:08:18.097 "uuid": "0cc6a9f6-0815-4da8-8cdf-b8709fa65236", 00:08:18.097 "is_configured": true, 00:08:18.097 "data_offset": 2048, 00:08:18.097 "data_size": 63488 00:08:18.097 }, 00:08:18.097 { 00:08:18.097 "name": "BaseBdev3", 00:08:18.097 "uuid": "622d237a-a885-4aeb-87dc-37c9b663a8fe", 00:08:18.097 "is_configured": true, 00:08:18.097 "data_offset": 2048, 00:08:18.097 "data_size": 63488 00:08:18.097 } 00:08:18.097 ] 00:08:18.097 } 00:08:18.097 } 00:08:18.097 }' 00:08:18.097 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:18.097 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:18.097 BaseBdev2 00:08:18.097 BaseBdev3' 00:08:18.097 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:18.357 [2024-11-16 18:49:01.703110] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:18.357 [2024-11-16 18:49:01.703137] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.357 [2024-11-16 18:49:01.703205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.357 [2024-11-16 18:49:01.703254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.357 [2024-11-16 18:49:01.703265] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64285 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64285 ']' 00:08:18.357 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64285 00:08:18.357 18:49:01 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:18.358 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:18.358 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64285 00:08:18.358 killing process with pid 64285 00:08:18.358 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.358 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.358 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64285' 00:08:18.358 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64285 00:08:18.358 [2024-11-16 18:49:01.749421] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.358 18:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64285 00:08:18.617 [2024-11-16 18:49:02.030540] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:19.998 ************************************ 00:08:19.998 END TEST raid_state_function_test_sb 00:08:19.998 ************************************ 00:08:19.998 18:49:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:19.998 00:08:19.998 real 0m10.100s 00:08:19.998 user 0m16.107s 00:08:19.998 sys 0m1.712s 00:08:19.999 18:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:19.999 18:49:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:19.999 18:49:03 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:08:19.999 18:49:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:19.999 18:49:03 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:08:19.999 18:49:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:19.999 ************************************ 00:08:19.999 START TEST raid_superblock_test 00:08:19.999 ************************************ 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:19.999 18:49:03 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64894 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64894 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64894 ']' 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:19.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:19.999 18:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.999 [2024-11-16 18:49:03.245931] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:19.999 [2024-11-16 18:49:03.246040] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64894 ] 00:08:19.999 [2024-11-16 18:49:03.412164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.258 [2024-11-16 18:49:03.521293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.258 [2024-11-16 18:49:03.709172] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.258 [2024-11-16 18:49:03.709222] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:20.829 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:20.829 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:20.829 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:20.829 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:20.830 
18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.830 malloc1 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.830 [2024-11-16 18:49:04.113837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:20.830 [2024-11-16 18:49:04.113917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.830 [2024-11-16 18:49:04.113940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:20.830 [2024-11-16 18:49:04.113949] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.830 [2024-11-16 18:49:04.115997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.830 [2024-11-16 18:49:04.116036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:20.830 pt1 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.830 malloc2 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.830 [2024-11-16 18:49:04.167258] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:20.830 [2024-11-16 18:49:04.167311] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.830 [2024-11-16 18:49:04.167351] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:20.830 [2024-11-16 18:49:04.167359] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.830 [2024-11-16 18:49:04.169436] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.830 [2024-11-16 18:49:04.169473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:20.830 
pt2 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.830 malloc3 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.830 [2024-11-16 18:49:04.235101] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:20.830 [2024-11-16 18:49:04.235168] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:20.830 [2024-11-16 18:49:04.235188] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:20.830 [2024-11-16 18:49:04.235197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:20.830 [2024-11-16 18:49:04.237219] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:20.830 [2024-11-16 18:49:04.237255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:20.830 pt3 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.830 [2024-11-16 18:49:04.247131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:20.830 [2024-11-16 18:49:04.248933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:20.830 [2024-11-16 18:49:04.249001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:20.830 [2024-11-16 18:49:04.249160] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:20.830 [2024-11-16 18:49:04.249173] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:20.830 [2024-11-16 18:49:04.249424] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
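The creation steps logged above (three malloc bdevs, each wrapped in a passthru bdev, then combined into a raid0 volume with a superblock and a 64 KiB strip size) correspond to a short RPC sequence. Below is a minimal sketch of that sequence; the `rpc` function here is a stand-in that only echoes what would be passed to SPDK's `scripts/rpc.py`, so the sketch runs without a live SPDK target. The bdev names and UUIDs mirror the log, but the echo wrapper itself is an assumption for illustration.

```shell
#!/usr/bin/env bash
# Stand-in for SPDK's scripts/rpc.py: only echoes the call (illustrative).
rpc() { echo "rpc.py $*"; }

# Three 32 MiB malloc bdevs with 512-byte blocks, each claimed by a passthru
# bdev with a fixed UUID, as in the log above.
for i in 1 2 3; do
    rpc bdev_malloc_create 32 512 -b "malloc$i"
    rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done

# raid0 over the three passthru bdevs, 64 KiB strip (-z 64), superblock (-s).
rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s
```

Against a running target, the same commands through the real `rpc.py` would produce the `bdev_raid_configure_cont` debug records seen in the log.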
00:08:20.830 [2024-11-16 18:49:04.249587] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:20.830 [2024-11-16 18:49:04.249604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:20.830 [2024-11-16 18:49:04.249768] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.830 18:49:04 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.830 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.090 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.090 "name": "raid_bdev1", 00:08:21.090 "uuid": "e03c8da8-554b-4641-a3a2-cdc4edd81eb2", 00:08:21.090 "strip_size_kb": 64, 00:08:21.090 "state": "online", 00:08:21.090 "raid_level": "raid0", 00:08:21.090 "superblock": true, 00:08:21.090 "num_base_bdevs": 3, 00:08:21.090 "num_base_bdevs_discovered": 3, 00:08:21.090 "num_base_bdevs_operational": 3, 00:08:21.090 "base_bdevs_list": [ 00:08:21.090 { 00:08:21.090 "name": "pt1", 00:08:21.090 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.090 "is_configured": true, 00:08:21.090 "data_offset": 2048, 00:08:21.090 "data_size": 63488 00:08:21.090 }, 00:08:21.090 { 00:08:21.090 "name": "pt2", 00:08:21.090 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.090 "is_configured": true, 00:08:21.090 "data_offset": 2048, 00:08:21.090 "data_size": 63488 00:08:21.090 }, 00:08:21.090 { 00:08:21.090 "name": "pt3", 00:08:21.090 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:21.090 "is_configured": true, 00:08:21.090 "data_offset": 2048, 00:08:21.090 "data_size": 63488 00:08:21.090 } 00:08:21.090 ] 00:08:21.090 }' 00:08:21.090 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.090 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.350 [2024-11-16 18:49:04.674685] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:21.350 "name": "raid_bdev1", 00:08:21.350 "aliases": [ 00:08:21.350 "e03c8da8-554b-4641-a3a2-cdc4edd81eb2" 00:08:21.350 ], 00:08:21.350 "product_name": "Raid Volume", 00:08:21.350 "block_size": 512, 00:08:21.350 "num_blocks": 190464, 00:08:21.350 "uuid": "e03c8da8-554b-4641-a3a2-cdc4edd81eb2", 00:08:21.350 "assigned_rate_limits": { 00:08:21.350 "rw_ios_per_sec": 0, 00:08:21.350 "rw_mbytes_per_sec": 0, 00:08:21.350 "r_mbytes_per_sec": 0, 00:08:21.350 "w_mbytes_per_sec": 0 00:08:21.350 }, 00:08:21.350 "claimed": false, 00:08:21.350 "zoned": false, 00:08:21.350 "supported_io_types": { 00:08:21.350 "read": true, 00:08:21.350 "write": true, 00:08:21.350 "unmap": true, 00:08:21.350 "flush": true, 00:08:21.350 "reset": true, 00:08:21.350 "nvme_admin": false, 00:08:21.350 "nvme_io": false, 00:08:21.350 "nvme_io_md": false, 00:08:21.350 "write_zeroes": true, 00:08:21.350 "zcopy": false, 00:08:21.350 "get_zone_info": false, 00:08:21.350 "zone_management": false, 00:08:21.350 "zone_append": false, 00:08:21.350 "compare": 
false, 00:08:21.350 "compare_and_write": false, 00:08:21.350 "abort": false, 00:08:21.350 "seek_hole": false, 00:08:21.350 "seek_data": false, 00:08:21.350 "copy": false, 00:08:21.350 "nvme_iov_md": false 00:08:21.350 }, 00:08:21.350 "memory_domains": [ 00:08:21.350 { 00:08:21.350 "dma_device_id": "system", 00:08:21.350 "dma_device_type": 1 00:08:21.350 }, 00:08:21.350 { 00:08:21.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.350 "dma_device_type": 2 00:08:21.350 }, 00:08:21.350 { 00:08:21.350 "dma_device_id": "system", 00:08:21.350 "dma_device_type": 1 00:08:21.350 }, 00:08:21.350 { 00:08:21.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.350 "dma_device_type": 2 00:08:21.350 }, 00:08:21.350 { 00:08:21.350 "dma_device_id": "system", 00:08:21.350 "dma_device_type": 1 00:08:21.350 }, 00:08:21.350 { 00:08:21.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.350 "dma_device_type": 2 00:08:21.350 } 00:08:21.350 ], 00:08:21.350 "driver_specific": { 00:08:21.350 "raid": { 00:08:21.350 "uuid": "e03c8da8-554b-4641-a3a2-cdc4edd81eb2", 00:08:21.350 "strip_size_kb": 64, 00:08:21.350 "state": "online", 00:08:21.350 "raid_level": "raid0", 00:08:21.350 "superblock": true, 00:08:21.350 "num_base_bdevs": 3, 00:08:21.350 "num_base_bdevs_discovered": 3, 00:08:21.350 "num_base_bdevs_operational": 3, 00:08:21.350 "base_bdevs_list": [ 00:08:21.350 { 00:08:21.350 "name": "pt1", 00:08:21.350 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.350 "is_configured": true, 00:08:21.350 "data_offset": 2048, 00:08:21.350 "data_size": 63488 00:08:21.350 }, 00:08:21.350 { 00:08:21.350 "name": "pt2", 00:08:21.350 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.350 "is_configured": true, 00:08:21.350 "data_offset": 2048, 00:08:21.350 "data_size": 63488 00:08:21.350 }, 00:08:21.350 { 00:08:21.350 "name": "pt3", 00:08:21.350 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:21.350 "is_configured": true, 00:08:21.350 "data_offset": 2048, 00:08:21.350 "data_size": 
63488 00:08:21.350 } 00:08:21.350 ] 00:08:21.350 } 00:08:21.350 } 00:08:21.350 }' 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:21.350 pt2 00:08:21.350 pt3' 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.350 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:21.611 [2024-11-16 18:49:04.914220] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e03c8da8-554b-4641-a3a2-cdc4edd81eb2 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e03c8da8-554b-4641-a3a2-cdc4edd81eb2 ']' 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 [2024-11-16 18:49:04.961872] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.611 [2024-11-16 18:49:04.961902] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.611 [2024-11-16 18:49:04.961974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.611 [2024-11-16 18:49:04.962033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.611 [2024-11-16 18:49:04.962043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 18:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:21.611 18:49:05 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.611 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.872 [2024-11-16 18:49:05.109677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:21.872 [2024-11-16 18:49:05.111594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:21.872 [2024-11-16 18:49:05.111663] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:21.872 [2024-11-16 18:49:05.111714] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:21.872 [2024-11-16 18:49:05.111758] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:21.872 [2024-11-16 18:49:05.111778] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:21.872 [2024-11-16 18:49:05.111795] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:21.872 [2024-11-16 18:49:05.111807] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:21.872 request: 00:08:21.872 { 00:08:21.872 "name": "raid_bdev1", 00:08:21.872 "raid_level": "raid0", 00:08:21.872 "base_bdevs": [ 00:08:21.872 "malloc1", 00:08:21.872 "malloc2", 00:08:21.872 "malloc3" 00:08:21.872 ], 00:08:21.872 "strip_size_kb": 64, 00:08:21.872 "superblock": false, 00:08:21.872 "method": "bdev_raid_create", 00:08:21.872 "req_id": 1 00:08:21.872 } 00:08:21.872 Got JSON-RPC error response 00:08:21.872 response: 00:08:21.872 { 00:08:21.872 "code": -17, 00:08:21.872 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:21.872 } 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.872 [2024-11-16 18:49:05.173504] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:21.872 [2024-11-16 18:49:05.173554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:21.872 [2024-11-16 18:49:05.173571] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:21.872 [2024-11-16 18:49:05.173580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:21.872 [2024-11-16 18:49:05.175722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:21.872 [2024-11-16 18:49:05.175756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:21.872 [2024-11-16 18:49:05.175835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:21.872 [2024-11-16 18:49:05.175897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:08:21.872 pt1 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.872 "name": "raid_bdev1", 00:08:21.872 "uuid": "e03c8da8-554b-4641-a3a2-cdc4edd81eb2", 00:08:21.872 
"strip_size_kb": 64, 00:08:21.872 "state": "configuring", 00:08:21.872 "raid_level": "raid0", 00:08:21.872 "superblock": true, 00:08:21.872 "num_base_bdevs": 3, 00:08:21.872 "num_base_bdevs_discovered": 1, 00:08:21.872 "num_base_bdevs_operational": 3, 00:08:21.872 "base_bdevs_list": [ 00:08:21.872 { 00:08:21.872 "name": "pt1", 00:08:21.872 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:21.872 "is_configured": true, 00:08:21.872 "data_offset": 2048, 00:08:21.872 "data_size": 63488 00:08:21.872 }, 00:08:21.872 { 00:08:21.872 "name": null, 00:08:21.872 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:21.872 "is_configured": false, 00:08:21.872 "data_offset": 2048, 00:08:21.872 "data_size": 63488 00:08:21.872 }, 00:08:21.872 { 00:08:21.872 "name": null, 00:08:21.872 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:21.872 "is_configured": false, 00:08:21.872 "data_offset": 2048, 00:08:21.872 "data_size": 63488 00:08:21.872 } 00:08:21.872 ] 00:08:21.872 }' 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.872 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.133 [2024-11-16 18:49:05.524923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:22.133 [2024-11-16 18:49:05.524989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.133 [2024-11-16 18:49:05.525011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:08:22.133 [2024-11-16 18:49:05.525020] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.133 [2024-11-16 18:49:05.525457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.133 [2024-11-16 18:49:05.525480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:22.133 [2024-11-16 18:49:05.525565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:22.133 [2024-11-16 18:49:05.525591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:22.133 pt2 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.133 [2024-11-16 18:49:05.536898] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.133 18:49:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.133 "name": "raid_bdev1", 00:08:22.133 "uuid": "e03c8da8-554b-4641-a3a2-cdc4edd81eb2", 00:08:22.133 "strip_size_kb": 64, 00:08:22.133 "state": "configuring", 00:08:22.133 "raid_level": "raid0", 00:08:22.133 "superblock": true, 00:08:22.133 "num_base_bdevs": 3, 00:08:22.133 "num_base_bdevs_discovered": 1, 00:08:22.133 "num_base_bdevs_operational": 3, 00:08:22.133 "base_bdevs_list": [ 00:08:22.133 { 00:08:22.133 "name": "pt1", 00:08:22.133 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.133 "is_configured": true, 00:08:22.133 "data_offset": 2048, 00:08:22.133 "data_size": 63488 00:08:22.133 }, 00:08:22.133 { 00:08:22.133 "name": null, 00:08:22.133 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.133 "is_configured": false, 00:08:22.133 "data_offset": 0, 00:08:22.133 "data_size": 63488 00:08:22.133 }, 00:08:22.133 { 00:08:22.133 "name": null, 00:08:22.133 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:22.133 
"is_configured": false, 00:08:22.133 "data_offset": 2048, 00:08:22.133 "data_size": 63488 00:08:22.133 } 00:08:22.133 ] 00:08:22.133 }' 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.133 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.703 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:22.703 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:22.703 18:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:22.703 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.703 18:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.703 [2024-11-16 18:49:06.000095] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:22.703 [2024-11-16 18:49:06.000164] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.703 [2024-11-16 18:49:06.000182] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:22.703 [2024-11-16 18:49:06.000193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.703 [2024-11-16 18:49:06.000650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.703 [2024-11-16 18:49:06.000690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:22.703 [2024-11-16 18:49:06.000778] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:22.703 [2024-11-16 18:49:06.000809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:22.703 pt2 00:08:22.703 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:22.703 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:22.703 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:22.703 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:22.703 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.703 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.703 [2024-11-16 18:49:06.012073] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:22.704 [2024-11-16 18:49:06.012124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:22.704 [2024-11-16 18:49:06.012139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:22.704 [2024-11-16 18:49:06.012149] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:22.704 [2024-11-16 18:49:06.012546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:22.704 [2024-11-16 18:49:06.012574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:22.704 [2024-11-16 18:49:06.012643] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:22.704 [2024-11-16 18:49:06.012685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:22.704 [2024-11-16 18:49:06.012805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.704 [2024-11-16 18:49:06.012823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:22.704 [2024-11-16 18:49:06.013063] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:22.704 [2024-11-16 18:49:06.013225] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.704 [2024-11-16 18:49:06.013237] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:22.704 [2024-11-16 18:49:06.013372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.704 pt3 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.704 "name": "raid_bdev1", 00:08:22.704 "uuid": "e03c8da8-554b-4641-a3a2-cdc4edd81eb2", 00:08:22.704 "strip_size_kb": 64, 00:08:22.704 "state": "online", 00:08:22.704 "raid_level": "raid0", 00:08:22.704 "superblock": true, 00:08:22.704 "num_base_bdevs": 3, 00:08:22.704 "num_base_bdevs_discovered": 3, 00:08:22.704 "num_base_bdevs_operational": 3, 00:08:22.704 "base_bdevs_list": [ 00:08:22.704 { 00:08:22.704 "name": "pt1", 00:08:22.704 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:22.704 "is_configured": true, 00:08:22.704 "data_offset": 2048, 00:08:22.704 "data_size": 63488 00:08:22.704 }, 00:08:22.704 { 00:08:22.704 "name": "pt2", 00:08:22.704 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:22.704 "is_configured": true, 00:08:22.704 "data_offset": 2048, 00:08:22.704 "data_size": 63488 00:08:22.704 }, 00:08:22.704 { 00:08:22.704 "name": "pt3", 00:08:22.704 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:22.704 "is_configured": true, 00:08:22.704 "data_offset": 2048, 00:08:22.704 "data_size": 63488 00:08:22.704 } 00:08:22.704 ] 00:08:22.704 }' 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.704 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.972 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:22.972 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:22.972 18:49:06 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:22.972 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:22.972 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:22.972 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:22.972 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:22.972 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:22.972 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.255 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.255 [2024-11-16 18:49:06.443670] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.255 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.255 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.255 "name": "raid_bdev1", 00:08:23.255 "aliases": [ 00:08:23.255 "e03c8da8-554b-4641-a3a2-cdc4edd81eb2" 00:08:23.255 ], 00:08:23.255 "product_name": "Raid Volume", 00:08:23.255 "block_size": 512, 00:08:23.255 "num_blocks": 190464, 00:08:23.255 "uuid": "e03c8da8-554b-4641-a3a2-cdc4edd81eb2", 00:08:23.255 "assigned_rate_limits": { 00:08:23.255 "rw_ios_per_sec": 0, 00:08:23.255 "rw_mbytes_per_sec": 0, 00:08:23.255 "r_mbytes_per_sec": 0, 00:08:23.255 "w_mbytes_per_sec": 0 00:08:23.255 }, 00:08:23.255 "claimed": false, 00:08:23.255 "zoned": false, 00:08:23.255 "supported_io_types": { 00:08:23.255 "read": true, 00:08:23.255 "write": true, 00:08:23.255 "unmap": true, 00:08:23.255 "flush": true, 00:08:23.255 "reset": true, 00:08:23.255 "nvme_admin": false, 00:08:23.255 "nvme_io": false, 00:08:23.255 "nvme_io_md": false, 00:08:23.255 
"write_zeroes": true, 00:08:23.255 "zcopy": false, 00:08:23.255 "get_zone_info": false, 00:08:23.255 "zone_management": false, 00:08:23.255 "zone_append": false, 00:08:23.255 "compare": false, 00:08:23.255 "compare_and_write": false, 00:08:23.255 "abort": false, 00:08:23.255 "seek_hole": false, 00:08:23.255 "seek_data": false, 00:08:23.255 "copy": false, 00:08:23.255 "nvme_iov_md": false 00:08:23.255 }, 00:08:23.255 "memory_domains": [ 00:08:23.255 { 00:08:23.255 "dma_device_id": "system", 00:08:23.255 "dma_device_type": 1 00:08:23.255 }, 00:08:23.255 { 00:08:23.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.255 "dma_device_type": 2 00:08:23.255 }, 00:08:23.255 { 00:08:23.255 "dma_device_id": "system", 00:08:23.255 "dma_device_type": 1 00:08:23.255 }, 00:08:23.255 { 00:08:23.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.255 "dma_device_type": 2 00:08:23.255 }, 00:08:23.255 { 00:08:23.255 "dma_device_id": "system", 00:08:23.255 "dma_device_type": 1 00:08:23.255 }, 00:08:23.255 { 00:08:23.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.255 "dma_device_type": 2 00:08:23.255 } 00:08:23.255 ], 00:08:23.255 "driver_specific": { 00:08:23.255 "raid": { 00:08:23.255 "uuid": "e03c8da8-554b-4641-a3a2-cdc4edd81eb2", 00:08:23.255 "strip_size_kb": 64, 00:08:23.255 "state": "online", 00:08:23.255 "raid_level": "raid0", 00:08:23.255 "superblock": true, 00:08:23.255 "num_base_bdevs": 3, 00:08:23.255 "num_base_bdevs_discovered": 3, 00:08:23.255 "num_base_bdevs_operational": 3, 00:08:23.255 "base_bdevs_list": [ 00:08:23.255 { 00:08:23.255 "name": "pt1", 00:08:23.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:23.255 "is_configured": true, 00:08:23.255 "data_offset": 2048, 00:08:23.255 "data_size": 63488 00:08:23.255 }, 00:08:23.255 { 00:08:23.255 "name": "pt2", 00:08:23.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:23.255 "is_configured": true, 00:08:23.255 "data_offset": 2048, 00:08:23.255 "data_size": 63488 00:08:23.255 }, 00:08:23.255 
{ 00:08:23.255 "name": "pt3", 00:08:23.255 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:23.255 "is_configured": true, 00:08:23.255 "data_offset": 2048, 00:08:23.255 "data_size": 63488 00:08:23.255 } 00:08:23.255 ] 00:08:23.255 } 00:08:23.255 } 00:08:23.255 }' 00:08:23.255 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.255 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:23.255 pt2 00:08:23.255 pt3' 00:08:23.255 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.255 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.255 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.255 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:23.256 18:49:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.256 
[2024-11-16 18:49:06.707121] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.256 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e03c8da8-554b-4641-a3a2-cdc4edd81eb2 '!=' e03c8da8-554b-4641-a3a2-cdc4edd81eb2 ']' 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64894 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64894 ']' 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64894 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64894 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64894' 00:08:23.516 killing process with pid 64894 00:08:23.516 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64894 00:08:23.516 [2024-11-16 18:49:06.768883] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:23.516 [2024-11-16 18:49:06.768978] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.516 [2024-11-16 18:49:06.769038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:23.516 [2024-11-16 18:49:06.769052] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:23.517 18:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64894 00:08:23.776 [2024-11-16 18:49:07.050624] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:24.716 18:49:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:24.716 00:08:24.716 real 0m4.936s 00:08:24.716 user 0m7.089s 00:08:24.716 sys 0m0.828s 00:08:24.716 18:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.716 18:49:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 ************************************ 00:08:24.716 END TEST raid_superblock_test 00:08:24.716 ************************************ 00:08:24.716 18:49:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:08:24.716 18:49:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:24.716 18:49:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.716 18:49:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:24.716 ************************************ 00:08:24.716 START TEST raid_read_error_test 00:08:24.716 ************************************ 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:24.716 18:49:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.X8VzeIIcaV 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65147 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65147 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65147 ']' 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.716 18:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.976 [2024-11-16 18:49:08.265590] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:24.976 [2024-11-16 18:49:08.265731] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65147 ] 00:08:24.976 [2024-11-16 18:49:08.441435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.236 [2024-11-16 18:49:08.547170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.496 [2024-11-16 18:49:08.737327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.497 [2024-11-16 18:49:08.737389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 BaseBdev1_malloc 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 true 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 [2024-11-16 18:49:09.141630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:25.757 [2024-11-16 18:49:09.141692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.757 [2024-11-16 18:49:09.141710] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:25.757 [2024-11-16 18:49:09.141736] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.757 [2024-11-16 18:49:09.143748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.757 [2024-11-16 18:49:09.143795] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:25.757 BaseBdev1 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 BaseBdev2_malloc 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 true 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.757 [2024-11-16 18:49:09.205560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:25.757 [2024-11-16 18:49:09.205626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:25.757 [2024-11-16 18:49:09.205641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:25.757 [2024-11-16 18:49:09.205651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:25.757 [2024-11-16 18:49:09.207641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:25.757 [2024-11-16 18:49:09.207685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:25.757 BaseBdev2 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.757 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.017 BaseBdev3_malloc 00:08:26.017 18:49:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.017 true 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.017 [2024-11-16 18:49:09.283435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:26.017 [2024-11-16 18:49:09.283482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.017 [2024-11-16 18:49:09.283498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:26.017 [2024-11-16 18:49:09.283508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.017 [2024-11-16 18:49:09.285573] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.017 [2024-11-16 18:49:09.285613] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:08:26.017 BaseBdev3 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.017 [2024-11-16 18:49:09.295486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:26.017 [2024-11-16 18:49:09.297207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.017 [2024-11-16 18:49:09.297288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:26.017 [2024-11-16 18:49:09.297474] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:26.017 [2024-11-16 18:49:09.297492] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:26.017 [2024-11-16 18:49:09.297732] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:26.017 [2024-11-16 18:49:09.297889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:26.017 [2024-11-16 18:49:09.297913] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:26.017 [2024-11-16 18:49:09.298038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.017 18:49:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.017 "name": "raid_bdev1", 00:08:26.017 "uuid": "84521467-6dd7-4d4d-8a9f-5b82c20da757", 00:08:26.017 "strip_size_kb": 64, 00:08:26.017 "state": "online", 00:08:26.017 "raid_level": "raid0", 00:08:26.017 "superblock": true, 00:08:26.017 "num_base_bdevs": 3, 00:08:26.017 "num_base_bdevs_discovered": 3, 00:08:26.017 "num_base_bdevs_operational": 3, 00:08:26.017 "base_bdevs_list": [ 00:08:26.017 { 00:08:26.017 "name": "BaseBdev1", 00:08:26.017 "uuid": "37c158b5-4e8a-5923-aee0-da328ae3e0c1", 00:08:26.017 "is_configured": true, 00:08:26.017 "data_offset": 2048, 00:08:26.017 "data_size": 63488 00:08:26.017 }, 00:08:26.017 { 00:08:26.017 "name": "BaseBdev2", 00:08:26.017 "uuid": "15aa3386-ee40-5128-bd7a-2c8487456be7", 00:08:26.017 "is_configured": true, 00:08:26.017 "data_offset": 2048, 00:08:26.017 "data_size": 63488 
00:08:26.017 }, 00:08:26.017 { 00:08:26.017 "name": "BaseBdev3", 00:08:26.017 "uuid": "99c7b87b-7556-52b5-8872-d08207ef7ffe", 00:08:26.017 "is_configured": true, 00:08:26.017 "data_offset": 2048, 00:08:26.017 "data_size": 63488 00:08:26.017 } 00:08:26.017 ] 00:08:26.017 }' 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.017 18:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.587 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:26.587 18:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:26.587 [2024-11-16 18:49:09.867820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:27.527 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:27.527 18:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.527 18:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.527 18:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.527 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:27.527 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:27.527 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.528 "name": "raid_bdev1", 00:08:27.528 "uuid": "84521467-6dd7-4d4d-8a9f-5b82c20da757", 00:08:27.528 "strip_size_kb": 64, 00:08:27.528 "state": "online", 00:08:27.528 "raid_level": "raid0", 00:08:27.528 "superblock": true, 00:08:27.528 "num_base_bdevs": 3, 00:08:27.528 "num_base_bdevs_discovered": 3, 00:08:27.528 "num_base_bdevs_operational": 3, 00:08:27.528 "base_bdevs_list": [ 00:08:27.528 { 00:08:27.528 "name": "BaseBdev1", 00:08:27.528 "uuid": "37c158b5-4e8a-5923-aee0-da328ae3e0c1", 00:08:27.528 "is_configured": true, 00:08:27.528 "data_offset": 2048, 00:08:27.528 "data_size": 63488 
00:08:27.528 }, 00:08:27.528 { 00:08:27.528 "name": "BaseBdev2", 00:08:27.528 "uuid": "15aa3386-ee40-5128-bd7a-2c8487456be7", 00:08:27.528 "is_configured": true, 00:08:27.528 "data_offset": 2048, 00:08:27.528 "data_size": 63488 00:08:27.528 }, 00:08:27.528 { 00:08:27.528 "name": "BaseBdev3", 00:08:27.528 "uuid": "99c7b87b-7556-52b5-8872-d08207ef7ffe", 00:08:27.528 "is_configured": true, 00:08:27.528 "data_offset": 2048, 00:08:27.528 "data_size": 63488 00:08:27.528 } 00:08:27.528 ] 00:08:27.528 }' 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.528 18:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.788 [2024-11-16 18:49:11.221936] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.788 [2024-11-16 18:49:11.222028] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.788 [2024-11-16 18:49:11.224629] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.788 [2024-11-16 18:49:11.224730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.788 [2024-11-16 18:49:11.224787] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.788 [2024-11-16 18:49:11.224850] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:27.788 { 00:08:27.788 "results": [ 00:08:27.788 { 00:08:27.788 "job": "raid_bdev1", 00:08:27.788 "core_mask": "0x1", 00:08:27.788 "workload": "randrw", 00:08:27.788 "percentage": 50, 
00:08:27.788 "status": "finished", 00:08:27.788 "queue_depth": 1, 00:08:27.788 "io_size": 131072, 00:08:27.788 "runtime": 1.355103, 00:08:27.788 "iops": 16682.126746084985, 00:08:27.788 "mibps": 2085.265843260623, 00:08:27.788 "io_failed": 1, 00:08:27.788 "io_timeout": 0, 00:08:27.788 "avg_latency_us": 83.2970680526938, 00:08:27.788 "min_latency_us": 24.482096069868994, 00:08:27.788 "max_latency_us": 1366.5257641921398 00:08:27.788 } 00:08:27.788 ], 00:08:27.788 "core_count": 1 00:08:27.788 } 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65147 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65147 ']' 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65147 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.788 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65147 00:08:28.049 killing process with pid 65147 00:08:28.049 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:28.049 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:28.049 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65147' 00:08:28.049 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65147 00:08:28.049 [2024-11-16 18:49:11.262191] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.049 18:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65147 00:08:28.049 [2024-11-16 
18:49:11.483920] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.X8VzeIIcaV 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:29.431 ************************************ 00:08:29.431 END TEST raid_read_error_test 00:08:29.431 ************************************ 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:29.431 00:08:29.431 real 0m4.426s 00:08:29.431 user 0m5.289s 00:08:29.431 sys 0m0.549s 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.431 18:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.431 18:49:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:08:29.431 18:49:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:29.431 18:49:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.431 18:49:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:29.431 ************************************ 00:08:29.431 START TEST raid_write_error_test 00:08:29.431 ************************************ 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:08:29.431 18:49:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:29.431 18:49:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.byeA6NoA6D 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65293 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65293 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65293 ']' 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.431 18:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:29.431 [2024-11-16 18:49:12.762556] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:29.431 [2024-11-16 18:49:12.762776] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65293 ] 00:08:29.691 [2024-11-16 18:49:12.934324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.691 [2024-11-16 18:49:13.040177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.952 [2024-11-16 18:49:13.235826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.952 [2024-11-16 18:49:13.235920] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.212 BaseBdev1_malloc 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.212 true 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.212 [2024-11-16 18:49:13.642638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:30.212 [2024-11-16 18:49:13.642714] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.212 [2024-11-16 18:49:13.642735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:30.212 [2024-11-16 18:49:13.642746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.212 [2024-11-16 18:49:13.644793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.212 [2024-11-16 18:49:13.644899] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:30.212 BaseBdev1 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.212 18:49:13 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:30.473 BaseBdev2_malloc 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 true 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 [2024-11-16 18:49:13.707563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:30.473 [2024-11-16 18:49:13.707614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.473 [2024-11-16 18:49:13.707630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:30.473 [2024-11-16 18:49:13.707640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.473 [2024-11-16 18:49:13.709634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.473 [2024-11-16 18:49:13.709745] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:30.473 BaseBdev2 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:30.473 18:49:13 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 BaseBdev3_malloc 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 true 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 [2024-11-16 18:49:13.784611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:08:30.473 [2024-11-16 18:49:13.784674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:30.473 [2024-11-16 18:49:13.784692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:30.473 [2024-11-16 18:49:13.784718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:30.473 [2024-11-16 18:49:13.786766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:30.473 [2024-11-16 18:49:13.786801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:08:30.473 BaseBdev3 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 [2024-11-16 18:49:13.796667] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.473 [2024-11-16 18:49:13.798347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:30.473 [2024-11-16 18:49:13.798425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:30.473 [2024-11-16 18:49:13.798610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:30.473 [2024-11-16 18:49:13.798623] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:30.473 [2024-11-16 18:49:13.798859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:08:30.473 [2024-11-16 18:49:13.799019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:30.473 [2024-11-16 18:49:13.799037] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:30.473 [2024-11-16 18:49:13.799205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.473 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:30.473 "name": "raid_bdev1", 00:08:30.473 "uuid": "e983bc91-df7f-4af8-8bc1-15a9d2723e1c", 00:08:30.473 "strip_size_kb": 64, 00:08:30.473 "state": "online", 00:08:30.473 "raid_level": "raid0", 00:08:30.473 "superblock": true, 00:08:30.473 "num_base_bdevs": 3, 00:08:30.473 "num_base_bdevs_discovered": 3, 00:08:30.473 "num_base_bdevs_operational": 3, 00:08:30.473 "base_bdevs_list": [ 00:08:30.473 { 00:08:30.473 "name": "BaseBdev1", 
00:08:30.473 "uuid": "4607b99e-0896-58e1-ab87-05a33d6c50fd", 00:08:30.474 "is_configured": true, 00:08:30.474 "data_offset": 2048, 00:08:30.474 "data_size": 63488 00:08:30.474 }, 00:08:30.474 { 00:08:30.474 "name": "BaseBdev2", 00:08:30.474 "uuid": "1e91be9e-09b3-5fdc-89f9-485e1a215493", 00:08:30.474 "is_configured": true, 00:08:30.474 "data_offset": 2048, 00:08:30.474 "data_size": 63488 00:08:30.474 }, 00:08:30.474 { 00:08:30.474 "name": "BaseBdev3", 00:08:30.474 "uuid": "cadcc1a0-356b-5a89-8a1a-48ffc8b3aa2c", 00:08:30.474 "is_configured": true, 00:08:30.474 "data_offset": 2048, 00:08:30.474 "data_size": 63488 00:08:30.474 } 00:08:30.474 ] 00:08:30.474 }' 00:08:30.474 18:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:30.474 18:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.043 18:49:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:31.043 18:49:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:31.043 [2024-11-16 18:49:14.348866] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.983 "name": "raid_bdev1", 00:08:31.983 "uuid": "e983bc91-df7f-4af8-8bc1-15a9d2723e1c", 00:08:31.983 "strip_size_kb": 64, 00:08:31.983 "state": "online", 00:08:31.983 
"raid_level": "raid0", 00:08:31.983 "superblock": true, 00:08:31.983 "num_base_bdevs": 3, 00:08:31.983 "num_base_bdevs_discovered": 3, 00:08:31.983 "num_base_bdevs_operational": 3, 00:08:31.983 "base_bdevs_list": [ 00:08:31.983 { 00:08:31.983 "name": "BaseBdev1", 00:08:31.983 "uuid": "4607b99e-0896-58e1-ab87-05a33d6c50fd", 00:08:31.983 "is_configured": true, 00:08:31.983 "data_offset": 2048, 00:08:31.983 "data_size": 63488 00:08:31.983 }, 00:08:31.983 { 00:08:31.983 "name": "BaseBdev2", 00:08:31.983 "uuid": "1e91be9e-09b3-5fdc-89f9-485e1a215493", 00:08:31.983 "is_configured": true, 00:08:31.983 "data_offset": 2048, 00:08:31.983 "data_size": 63488 00:08:31.983 }, 00:08:31.983 { 00:08:31.983 "name": "BaseBdev3", 00:08:31.983 "uuid": "cadcc1a0-356b-5a89-8a1a-48ffc8b3aa2c", 00:08:31.983 "is_configured": true, 00:08:31.983 "data_offset": 2048, 00:08:31.983 "data_size": 63488 00:08:31.983 } 00:08:31.983 ] 00:08:31.983 }' 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.983 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.243 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:32.243 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.243 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.503 [2024-11-16 18:49:15.716338] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:32.503 [2024-11-16 18:49:15.716431] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:32.503 [2024-11-16 18:49:15.719046] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:32.503 [2024-11-16 18:49:15.719129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.503 [2024-11-16 18:49:15.719186] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:32.503 [2024-11-16 18:49:15.719224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:32.503 { 00:08:32.503 "results": [ 00:08:32.503 { 00:08:32.503 "job": "raid_bdev1", 00:08:32.503 "core_mask": "0x1", 00:08:32.503 "workload": "randrw", 00:08:32.503 "percentage": 50, 00:08:32.503 "status": "finished", 00:08:32.503 "queue_depth": 1, 00:08:32.503 "io_size": 131072, 00:08:32.503 "runtime": 1.368481, 00:08:32.503 "iops": 16215.78962367764, 00:08:32.503 "mibps": 2026.973702959705, 00:08:32.503 "io_failed": 1, 00:08:32.503 "io_timeout": 0, 00:08:32.503 "avg_latency_us": 85.77177597340244, 00:08:32.503 "min_latency_us": 18.78078602620087, 00:08:32.503 "max_latency_us": 1380.8349344978167 00:08:32.503 } 00:08:32.503 ], 00:08:32.503 "core_count": 1 00:08:32.503 } 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65293 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65293 ']' 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65293 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65293 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:32.503 killing process with pid 65293 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:32.503 18:49:15 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65293' 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65293 00:08:32.503 [2024-11-16 18:49:15.762314] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:32.503 18:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65293 00:08:32.764 [2024-11-16 18:49:15.979146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.byeA6NoA6D 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:33.704 ************************************ 00:08:33.704 END TEST raid_write_error_test 00:08:33.704 ************************************ 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:33.704 00:08:33.704 real 0m4.417s 00:08:33.704 user 0m5.285s 00:08:33.704 sys 0m0.548s 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:33.704 18:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.704 18:49:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:33.704 18:49:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:08:33.704 18:49:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:33.704 18:49:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:33.704 18:49:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:33.704 ************************************ 00:08:33.704 START TEST raid_state_function_test 00:08:33.704 ************************************ 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:33.704 18:49:17 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:33.704 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:33.704 Process raid pid: 65431 00:08:33.705 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65431 00:08:33.705 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:33.705 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65431' 00:08:33.705 18:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65431 00:08:33.705 18:49:17 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65431 ']' 00:08:33.705 18:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.705 18:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.705 18:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.705 18:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.705 18:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.964 [2024-11-16 18:49:17.247372] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:33.964 [2024-11-16 18:49:17.247577] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.964 [2024-11-16 18:49:17.420925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.223 [2024-11-16 18:49:17.532141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.482 [2024-11-16 18:49:17.736803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.482 [2024-11-16 18:49:17.736888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.742 [2024-11-16 18:49:18.067363] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:34.742 [2024-11-16 18:49:18.067463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:34.742 [2024-11-16 18:49:18.067510] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:34.742 [2024-11-16 18:49:18.067535] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:34.742 [2024-11-16 18:49:18.067553] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:34.742 [2024-11-16 18:49:18.067574] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.742 "name": "Existed_Raid", 00:08:34.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.742 "strip_size_kb": 64, 00:08:34.742 "state": "configuring", 00:08:34.742 "raid_level": "concat", 00:08:34.742 "superblock": false, 00:08:34.742 "num_base_bdevs": 3, 00:08:34.742 "num_base_bdevs_discovered": 0, 00:08:34.742 "num_base_bdevs_operational": 3, 00:08:34.742 "base_bdevs_list": [ 00:08:34.742 { 00:08:34.742 "name": "BaseBdev1", 00:08:34.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.742 "is_configured": false, 00:08:34.742 "data_offset": 0, 00:08:34.742 "data_size": 0 00:08:34.742 }, 00:08:34.742 { 00:08:34.742 "name": "BaseBdev2", 00:08:34.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.742 "is_configured": false, 00:08:34.742 "data_offset": 0, 00:08:34.742 "data_size": 0 00:08:34.742 }, 00:08:34.742 { 00:08:34.742 "name": "BaseBdev3", 00:08:34.742 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:34.742 "is_configured": false, 00:08:34.742 "data_offset": 0, 00:08:34.742 "data_size": 0 00:08:34.742 } 00:08:34.742 ] 00:08:34.742 }' 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.742 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.311 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.311 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.311 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.311 [2024-11-16 18:49:18.518550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.311 [2024-11-16 18:49:18.518585] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:35.311 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 [2024-11-16 18:49:18.530527] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:35.312 [2024-11-16 18:49:18.530612] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:35.312 [2024-11-16 18:49:18.530639] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.312 [2024-11-16 18:49:18.530693] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:08:35.312 [2024-11-16 18:49:18.530714] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.312 [2024-11-16 18:49:18.530735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 [2024-11-16 18:49:18.579260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.312 BaseBdev1 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 [ 00:08:35.312 { 00:08:35.312 "name": "BaseBdev1", 00:08:35.312 "aliases": [ 00:08:35.312 "3c18fcd8-00c0-4ad5-9ebf-abe8100fa7fb" 00:08:35.312 ], 00:08:35.312 "product_name": "Malloc disk", 00:08:35.312 "block_size": 512, 00:08:35.312 "num_blocks": 65536, 00:08:35.312 "uuid": "3c18fcd8-00c0-4ad5-9ebf-abe8100fa7fb", 00:08:35.312 "assigned_rate_limits": { 00:08:35.312 "rw_ios_per_sec": 0, 00:08:35.312 "rw_mbytes_per_sec": 0, 00:08:35.312 "r_mbytes_per_sec": 0, 00:08:35.312 "w_mbytes_per_sec": 0 00:08:35.312 }, 00:08:35.312 "claimed": true, 00:08:35.312 "claim_type": "exclusive_write", 00:08:35.312 "zoned": false, 00:08:35.312 "supported_io_types": { 00:08:35.312 "read": true, 00:08:35.312 "write": true, 00:08:35.312 "unmap": true, 00:08:35.312 "flush": true, 00:08:35.312 "reset": true, 00:08:35.312 "nvme_admin": false, 00:08:35.312 "nvme_io": false, 00:08:35.312 "nvme_io_md": false, 00:08:35.312 "write_zeroes": true, 00:08:35.312 "zcopy": true, 00:08:35.312 "get_zone_info": false, 00:08:35.312 "zone_management": false, 00:08:35.312 "zone_append": false, 00:08:35.312 "compare": false, 00:08:35.312 "compare_and_write": false, 00:08:35.312 "abort": true, 00:08:35.312 "seek_hole": false, 00:08:35.312 "seek_data": false, 00:08:35.312 "copy": true, 00:08:35.312 "nvme_iov_md": false 00:08:35.312 }, 00:08:35.312 "memory_domains": [ 00:08:35.312 { 00:08:35.312 "dma_device_id": "system", 00:08:35.312 "dma_device_type": 1 00:08:35.312 }, 00:08:35.312 { 00:08:35.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:08:35.312 "dma_device_type": 2 00:08:35.312 } 00:08:35.312 ], 00:08:35.312 "driver_specific": {} 00:08:35.312 } 00:08:35.312 ] 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.312 18:49:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.312 "name": "Existed_Raid", 00:08:35.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.312 "strip_size_kb": 64, 00:08:35.312 "state": "configuring", 00:08:35.312 "raid_level": "concat", 00:08:35.312 "superblock": false, 00:08:35.312 "num_base_bdevs": 3, 00:08:35.312 "num_base_bdevs_discovered": 1, 00:08:35.312 "num_base_bdevs_operational": 3, 00:08:35.312 "base_bdevs_list": [ 00:08:35.312 { 00:08:35.312 "name": "BaseBdev1", 00:08:35.312 "uuid": "3c18fcd8-00c0-4ad5-9ebf-abe8100fa7fb", 00:08:35.312 "is_configured": true, 00:08:35.312 "data_offset": 0, 00:08:35.312 "data_size": 65536 00:08:35.312 }, 00:08:35.312 { 00:08:35.312 "name": "BaseBdev2", 00:08:35.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.312 "is_configured": false, 00:08:35.312 "data_offset": 0, 00:08:35.312 "data_size": 0 00:08:35.312 }, 00:08:35.312 { 00:08:35.312 "name": "BaseBdev3", 00:08:35.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.312 "is_configured": false, 00:08:35.312 "data_offset": 0, 00:08:35.312 "data_size": 0 00:08:35.312 } 00:08:35.312 ] 00:08:35.312 }' 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.312 18:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.572 [2024-11-16 18:49:19.010549] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:35.572 [2024-11-16 18:49:19.010598] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.572 [2024-11-16 18:49:19.018570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:35.572 [2024-11-16 18:49:19.020458] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:35.572 [2024-11-16 18:49:19.020503] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:35.572 [2024-11-16 18:49:19.020513] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:35.572 [2024-11-16 18:49:19.020522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:35.572 18:49:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:35.572 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.831 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.831 "name": "Existed_Raid", 00:08:35.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.831 "strip_size_kb": 64, 00:08:35.831 "state": "configuring", 00:08:35.831 "raid_level": "concat", 00:08:35.831 "superblock": false, 00:08:35.831 "num_base_bdevs": 3, 00:08:35.831 "num_base_bdevs_discovered": 1, 00:08:35.831 "num_base_bdevs_operational": 3, 00:08:35.831 "base_bdevs_list": [ 00:08:35.831 { 00:08:35.831 "name": "BaseBdev1", 00:08:35.831 "uuid": "3c18fcd8-00c0-4ad5-9ebf-abe8100fa7fb", 00:08:35.831 "is_configured": true, 00:08:35.831 "data_offset": 
0, 00:08:35.831 "data_size": 65536 00:08:35.831 }, 00:08:35.831 { 00:08:35.831 "name": "BaseBdev2", 00:08:35.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.831 "is_configured": false, 00:08:35.831 "data_offset": 0, 00:08:35.831 "data_size": 0 00:08:35.831 }, 00:08:35.831 { 00:08:35.831 "name": "BaseBdev3", 00:08:35.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.831 "is_configured": false, 00:08:35.831 "data_offset": 0, 00:08:35.831 "data_size": 0 00:08:35.831 } 00:08:35.831 ] 00:08:35.831 }' 00:08:35.831 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.831 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.091 [2024-11-16 18:49:19.518494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.091 BaseBdev2 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.091 [ 00:08:36.091 { 00:08:36.091 "name": "BaseBdev2", 00:08:36.091 "aliases": [ 00:08:36.091 "43826398-8988-4a92-9d98-77ddc4f5e60b" 00:08:36.091 ], 00:08:36.091 "product_name": "Malloc disk", 00:08:36.091 "block_size": 512, 00:08:36.091 "num_blocks": 65536, 00:08:36.091 "uuid": "43826398-8988-4a92-9d98-77ddc4f5e60b", 00:08:36.091 "assigned_rate_limits": { 00:08:36.091 "rw_ios_per_sec": 0, 00:08:36.091 "rw_mbytes_per_sec": 0, 00:08:36.091 "r_mbytes_per_sec": 0, 00:08:36.091 "w_mbytes_per_sec": 0 00:08:36.091 }, 00:08:36.091 "claimed": true, 00:08:36.091 "claim_type": "exclusive_write", 00:08:36.091 "zoned": false, 00:08:36.091 "supported_io_types": { 00:08:36.091 "read": true, 00:08:36.091 "write": true, 00:08:36.091 "unmap": true, 00:08:36.091 "flush": true, 00:08:36.091 "reset": true, 00:08:36.091 "nvme_admin": false, 00:08:36.091 "nvme_io": false, 00:08:36.091 "nvme_io_md": false, 00:08:36.091 "write_zeroes": true, 00:08:36.091 "zcopy": true, 00:08:36.091 "get_zone_info": false, 00:08:36.091 "zone_management": false, 00:08:36.091 "zone_append": false, 00:08:36.091 "compare": false, 00:08:36.091 "compare_and_write": false, 00:08:36.091 "abort": true, 00:08:36.091 "seek_hole": 
false, 00:08:36.091 "seek_data": false, 00:08:36.091 "copy": true, 00:08:36.091 "nvme_iov_md": false 00:08:36.091 }, 00:08:36.091 "memory_domains": [ 00:08:36.091 { 00:08:36.091 "dma_device_id": "system", 00:08:36.091 "dma_device_type": 1 00:08:36.091 }, 00:08:36.091 { 00:08:36.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.091 "dma_device_type": 2 00:08:36.091 } 00:08:36.091 ], 00:08:36.091 "driver_specific": {} 00:08:36.091 } 00:08:36.091 ] 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.091 18:49:19 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.351 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.351 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.351 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.351 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.351 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.351 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.351 "name": "Existed_Raid", 00:08:36.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.351 "strip_size_kb": 64, 00:08:36.351 "state": "configuring", 00:08:36.351 "raid_level": "concat", 00:08:36.351 "superblock": false, 00:08:36.351 "num_base_bdevs": 3, 00:08:36.351 "num_base_bdevs_discovered": 2, 00:08:36.351 "num_base_bdevs_operational": 3, 00:08:36.351 "base_bdevs_list": [ 00:08:36.351 { 00:08:36.351 "name": "BaseBdev1", 00:08:36.351 "uuid": "3c18fcd8-00c0-4ad5-9ebf-abe8100fa7fb", 00:08:36.351 "is_configured": true, 00:08:36.351 "data_offset": 0, 00:08:36.351 "data_size": 65536 00:08:36.351 }, 00:08:36.351 { 00:08:36.351 "name": "BaseBdev2", 00:08:36.351 "uuid": "43826398-8988-4a92-9d98-77ddc4f5e60b", 00:08:36.351 "is_configured": true, 00:08:36.351 "data_offset": 0, 00:08:36.351 "data_size": 65536 00:08:36.351 }, 00:08:36.351 { 00:08:36.351 "name": "BaseBdev3", 00:08:36.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.351 "is_configured": false, 00:08:36.351 "data_offset": 0, 00:08:36.351 "data_size": 0 00:08:36.351 } 00:08:36.351 ] 00:08:36.351 }' 00:08:36.351 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.351 18:49:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.611 18:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:36.611 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.611 18:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.611 [2024-11-16 18:49:20.050400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:36.611 [2024-11-16 18:49:20.050527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:36.611 [2024-11-16 18:49:20.050558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:36.611 [2024-11-16 18:49:20.050895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:36.611 [2024-11-16 18:49:20.051109] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:36.611 [2024-11-16 18:49:20.051152] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:36.611 [2024-11-16 18:49:20.051443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.611 BaseBdev3 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.611 18:49:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.611 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.611 [ 00:08:36.611 { 00:08:36.611 "name": "BaseBdev3", 00:08:36.611 "aliases": [ 00:08:36.611 "9ee6dda3-84e0-4829-bb3a-929d1b626674" 00:08:36.611 ], 00:08:36.611 "product_name": "Malloc disk", 00:08:36.611 "block_size": 512, 00:08:36.611 "num_blocks": 65536, 00:08:36.611 "uuid": "9ee6dda3-84e0-4829-bb3a-929d1b626674", 00:08:36.611 "assigned_rate_limits": { 00:08:36.611 "rw_ios_per_sec": 0, 00:08:36.611 "rw_mbytes_per_sec": 0, 00:08:36.611 "r_mbytes_per_sec": 0, 00:08:36.611 "w_mbytes_per_sec": 0 00:08:36.611 }, 00:08:36.611 "claimed": true, 00:08:36.611 "claim_type": "exclusive_write", 00:08:36.871 "zoned": false, 00:08:36.871 "supported_io_types": { 00:08:36.871 "read": true, 00:08:36.871 "write": true, 00:08:36.871 "unmap": true, 00:08:36.871 "flush": true, 00:08:36.871 "reset": true, 00:08:36.871 "nvme_admin": false, 00:08:36.871 "nvme_io": false, 00:08:36.871 "nvme_io_md": false, 00:08:36.871 "write_zeroes": true, 00:08:36.871 "zcopy": true, 00:08:36.871 "get_zone_info": false, 00:08:36.871 "zone_management": false, 00:08:36.871 "zone_append": false, 00:08:36.871 "compare": false, 
00:08:36.871 "compare_and_write": false, 00:08:36.871 "abort": true, 00:08:36.871 "seek_hole": false, 00:08:36.871 "seek_data": false, 00:08:36.871 "copy": true, 00:08:36.871 "nvme_iov_md": false 00:08:36.871 }, 00:08:36.871 "memory_domains": [ 00:08:36.871 { 00:08:36.871 "dma_device_id": "system", 00:08:36.871 "dma_device_type": 1 00:08:36.871 }, 00:08:36.871 { 00:08:36.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.871 "dma_device_type": 2 00:08:36.871 } 00:08:36.871 ], 00:08:36.871 "driver_specific": {} 00:08:36.871 } 00:08:36.871 ] 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.871 "name": "Existed_Raid", 00:08:36.871 "uuid": "29ab3f4d-526e-4269-9fb7-24eb4c8a040e", 00:08:36.871 "strip_size_kb": 64, 00:08:36.871 "state": "online", 00:08:36.871 "raid_level": "concat", 00:08:36.871 "superblock": false, 00:08:36.871 "num_base_bdevs": 3, 00:08:36.871 "num_base_bdevs_discovered": 3, 00:08:36.871 "num_base_bdevs_operational": 3, 00:08:36.871 "base_bdevs_list": [ 00:08:36.871 { 00:08:36.871 "name": "BaseBdev1", 00:08:36.871 "uuid": "3c18fcd8-00c0-4ad5-9ebf-abe8100fa7fb", 00:08:36.871 "is_configured": true, 00:08:36.871 "data_offset": 0, 00:08:36.871 "data_size": 65536 00:08:36.871 }, 00:08:36.871 { 00:08:36.871 "name": "BaseBdev2", 00:08:36.871 "uuid": "43826398-8988-4a92-9d98-77ddc4f5e60b", 00:08:36.871 "is_configured": true, 00:08:36.871 "data_offset": 0, 00:08:36.871 "data_size": 65536 00:08:36.871 }, 00:08:36.871 { 00:08:36.871 "name": "BaseBdev3", 00:08:36.871 "uuid": "9ee6dda3-84e0-4829-bb3a-929d1b626674", 00:08:36.871 "is_configured": true, 00:08:36.871 "data_offset": 0, 00:08:36.871 "data_size": 65536 00:08:36.871 } 00:08:36.871 ] 00:08:36.871 }' 00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:36.871 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.131 [2024-11-16 18:49:20.533926] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.131 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:37.131 "name": "Existed_Raid", 00:08:37.131 "aliases": [ 00:08:37.131 "29ab3f4d-526e-4269-9fb7-24eb4c8a040e" 00:08:37.131 ], 00:08:37.131 "product_name": "Raid Volume", 00:08:37.131 "block_size": 512, 00:08:37.131 "num_blocks": 196608, 00:08:37.131 "uuid": "29ab3f4d-526e-4269-9fb7-24eb4c8a040e", 00:08:37.131 "assigned_rate_limits": { 00:08:37.131 "rw_ios_per_sec": 0, 00:08:37.131 "rw_mbytes_per_sec": 0, 00:08:37.131 "r_mbytes_per_sec": 
0, 00:08:37.131 "w_mbytes_per_sec": 0 00:08:37.131 }, 00:08:37.131 "claimed": false, 00:08:37.131 "zoned": false, 00:08:37.131 "supported_io_types": { 00:08:37.131 "read": true, 00:08:37.131 "write": true, 00:08:37.131 "unmap": true, 00:08:37.131 "flush": true, 00:08:37.131 "reset": true, 00:08:37.131 "nvme_admin": false, 00:08:37.131 "nvme_io": false, 00:08:37.131 "nvme_io_md": false, 00:08:37.131 "write_zeroes": true, 00:08:37.131 "zcopy": false, 00:08:37.131 "get_zone_info": false, 00:08:37.131 "zone_management": false, 00:08:37.131 "zone_append": false, 00:08:37.131 "compare": false, 00:08:37.131 "compare_and_write": false, 00:08:37.131 "abort": false, 00:08:37.131 "seek_hole": false, 00:08:37.131 "seek_data": false, 00:08:37.131 "copy": false, 00:08:37.131 "nvme_iov_md": false 00:08:37.131 }, 00:08:37.131 "memory_domains": [ 00:08:37.131 { 00:08:37.131 "dma_device_id": "system", 00:08:37.131 "dma_device_type": 1 00:08:37.131 }, 00:08:37.131 { 00:08:37.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.131 "dma_device_type": 2 00:08:37.131 }, 00:08:37.131 { 00:08:37.131 "dma_device_id": "system", 00:08:37.131 "dma_device_type": 1 00:08:37.131 }, 00:08:37.131 { 00:08:37.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.131 "dma_device_type": 2 00:08:37.131 }, 00:08:37.131 { 00:08:37.131 "dma_device_id": "system", 00:08:37.131 "dma_device_type": 1 00:08:37.131 }, 00:08:37.131 { 00:08:37.131 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.131 "dma_device_type": 2 00:08:37.131 } 00:08:37.131 ], 00:08:37.131 "driver_specific": { 00:08:37.131 "raid": { 00:08:37.131 "uuid": "29ab3f4d-526e-4269-9fb7-24eb4c8a040e", 00:08:37.132 "strip_size_kb": 64, 00:08:37.132 "state": "online", 00:08:37.132 "raid_level": "concat", 00:08:37.132 "superblock": false, 00:08:37.132 "num_base_bdevs": 3, 00:08:37.132 "num_base_bdevs_discovered": 3, 00:08:37.132 "num_base_bdevs_operational": 3, 00:08:37.132 "base_bdevs_list": [ 00:08:37.132 { 00:08:37.132 "name": "BaseBdev1", 
00:08:37.132 "uuid": "3c18fcd8-00c0-4ad5-9ebf-abe8100fa7fb", 00:08:37.132 "is_configured": true, 00:08:37.132 "data_offset": 0, 00:08:37.132 "data_size": 65536 00:08:37.132 }, 00:08:37.132 { 00:08:37.132 "name": "BaseBdev2", 00:08:37.132 "uuid": "43826398-8988-4a92-9d98-77ddc4f5e60b", 00:08:37.132 "is_configured": true, 00:08:37.132 "data_offset": 0, 00:08:37.132 "data_size": 65536 00:08:37.132 }, 00:08:37.132 { 00:08:37.132 "name": "BaseBdev3", 00:08:37.132 "uuid": "9ee6dda3-84e0-4829-bb3a-929d1b626674", 00:08:37.132 "is_configured": true, 00:08:37.132 "data_offset": 0, 00:08:37.132 "data_size": 65536 00:08:37.132 } 00:08:37.132 ] 00:08:37.132 } 00:08:37.132 } 00:08:37.132 }' 00:08:37.132 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:37.391 BaseBdev2 00:08:37.391 BaseBdev3' 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:37.391 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.392 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.392 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:37.392 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.392 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
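The comparisons above (bdev_raid.sh@189–193) build a metadata signature for the raid bdev and for each base bdev by joining `.block_size`, `.md_size`, `.md_interleave` and `.dif_type` with spaces. For these plain malloc bdevs the last three fields are null, which jq's `join()` renders as empty strings — hence the `'512 '` values with trailing spaces and the `[[ 512 == \5\1\2\ \ \ ]]` match with three escaped spaces. A minimal Python sketch of that jq filter's behavior (the field values are illustrative, not taken from a live RPC):

```python
def metadata_signature(bdev: dict) -> str:
    """Emulates the jq filter
    '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
    used by bdev_raid.sh to compare metadata formats.
    jq's join() renders null/absent fields as empty strings."""
    fields = ("block_size", "md_size", "md_interleave", "dif_type")
    return " ".join("" if bdev.get(f) is None else str(bdev[f]) for f in fields)

# A plain 512-byte malloc bdev: md_size/md_interleave/dif_type are absent,
# so the signature is "512" followed by three separator spaces.
raid_sig = metadata_signature({"block_size": 512})
base_sig = metadata_signature({"block_size": 512})
assert raid_sig == base_sig == "512   "
```

The trailing spaces matter: the test compares the joined strings verbatim, so a base bdev with, say, interleaved metadata would produce a different signature and fail the `[[ ... ]]` check.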
00:08:37.392 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:37.392 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:37.392 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.392 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.392 [2024-11-16 18:49:20.821187] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:37.392 [2024-11-16 18:49:20.821213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.392 [2024-11-16 18:49:20.821264] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.651 "name": "Existed_Raid", 00:08:37.651 "uuid": "29ab3f4d-526e-4269-9fb7-24eb4c8a040e", 00:08:37.651 "strip_size_kb": 64, 00:08:37.651 "state": "offline", 00:08:37.651 "raid_level": "concat", 00:08:37.651 "superblock": false, 00:08:37.651 "num_base_bdevs": 3, 00:08:37.651 "num_base_bdevs_discovered": 2, 00:08:37.651 "num_base_bdevs_operational": 2, 00:08:37.651 "base_bdevs_list": [ 00:08:37.651 { 00:08:37.651 "name": null, 00:08:37.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.651 "is_configured": false, 00:08:37.651 "data_offset": 0, 00:08:37.651 "data_size": 65536 00:08:37.651 }, 00:08:37.651 { 00:08:37.651 "name": "BaseBdev2", 00:08:37.651 "uuid": 
"43826398-8988-4a92-9d98-77ddc4f5e60b", 00:08:37.651 "is_configured": true, 00:08:37.651 "data_offset": 0, 00:08:37.651 "data_size": 65536 00:08:37.651 }, 00:08:37.651 { 00:08:37.651 "name": "BaseBdev3", 00:08:37.651 "uuid": "9ee6dda3-84e0-4829-bb3a-929d1b626674", 00:08:37.651 "is_configured": true, 00:08:37.651 "data_offset": 0, 00:08:37.651 "data_size": 65536 00:08:37.651 } 00:08:37.651 ] 00:08:37.651 }' 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.651 18:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.911 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:37.911 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:37.911 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:37.911 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.911 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.911 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.911 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.171 [2024-11-16 18:49:21.397213] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.171 [2024-11-16 18:49:21.539882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:38.171 [2024-11-16 18:49:21.539979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:38.171 18:49:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.171 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.431 BaseBdev2 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.431 
18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.431 [ 00:08:38.431 { 00:08:38.431 "name": "BaseBdev2", 00:08:38.431 "aliases": [ 00:08:38.431 "ef0afb83-7053-452b-a0e4-c231b357e5bb" 00:08:38.431 ], 00:08:38.431 "product_name": "Malloc disk", 00:08:38.431 "block_size": 512, 00:08:38.431 "num_blocks": 65536, 00:08:38.431 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb", 00:08:38.431 "assigned_rate_limits": { 00:08:38.431 "rw_ios_per_sec": 0, 00:08:38.431 "rw_mbytes_per_sec": 0, 00:08:38.431 "r_mbytes_per_sec": 0, 00:08:38.431 "w_mbytes_per_sec": 0 00:08:38.431 }, 00:08:38.431 "claimed": false, 00:08:38.431 "zoned": false, 00:08:38.431 "supported_io_types": { 00:08:38.431 "read": true, 00:08:38.431 "write": true, 00:08:38.431 "unmap": true, 00:08:38.431 "flush": true, 00:08:38.431 "reset": true, 00:08:38.431 "nvme_admin": false, 00:08:38.431 "nvme_io": false, 00:08:38.431 "nvme_io_md": false, 00:08:38.431 "write_zeroes": true, 
00:08:38.431 "zcopy": true, 00:08:38.431 "get_zone_info": false, 00:08:38.431 "zone_management": false, 00:08:38.431 "zone_append": false, 00:08:38.431 "compare": false, 00:08:38.431 "compare_and_write": false, 00:08:38.431 "abort": true, 00:08:38.431 "seek_hole": false, 00:08:38.431 "seek_data": false, 00:08:38.431 "copy": true, 00:08:38.431 "nvme_iov_md": false 00:08:38.431 }, 00:08:38.431 "memory_domains": [ 00:08:38.431 { 00:08:38.431 "dma_device_id": "system", 00:08:38.431 "dma_device_type": 1 00:08:38.431 }, 00:08:38.431 { 00:08:38.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.431 "dma_device_type": 2 00:08:38.431 } 00:08:38.431 ], 00:08:38.431 "driver_specific": {} 00:08:38.431 } 00:08:38.431 ] 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.431 BaseBdev3 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.431 18:49:21 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.431 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 [ 00:08:38.432 { 00:08:38.432 "name": "BaseBdev3", 00:08:38.432 "aliases": [ 00:08:38.432 "0c95d683-ae23-436e-814e-0775b979be8b" 00:08:38.432 ], 00:08:38.432 "product_name": "Malloc disk", 00:08:38.432 "block_size": 512, 00:08:38.432 "num_blocks": 65536, 00:08:38.432 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b", 00:08:38.432 "assigned_rate_limits": { 00:08:38.432 "rw_ios_per_sec": 0, 00:08:38.432 "rw_mbytes_per_sec": 0, 00:08:38.432 "r_mbytes_per_sec": 0, 00:08:38.432 "w_mbytes_per_sec": 0 00:08:38.432 }, 00:08:38.432 "claimed": false, 00:08:38.432 "zoned": false, 00:08:38.432 "supported_io_types": { 00:08:38.432 "read": true, 00:08:38.432 "write": true, 00:08:38.432 "unmap": true, 00:08:38.432 "flush": true, 00:08:38.432 "reset": true, 00:08:38.432 "nvme_admin": false, 00:08:38.432 "nvme_io": false, 00:08:38.432 "nvme_io_md": false, 00:08:38.432 "write_zeroes": true, 
00:08:38.432 "zcopy": true, 00:08:38.432 "get_zone_info": false, 00:08:38.432 "zone_management": false, 00:08:38.432 "zone_append": false, 00:08:38.432 "compare": false, 00:08:38.432 "compare_and_write": false, 00:08:38.432 "abort": true, 00:08:38.432 "seek_hole": false, 00:08:38.432 "seek_data": false, 00:08:38.432 "copy": true, 00:08:38.432 "nvme_iov_md": false 00:08:38.432 }, 00:08:38.432 "memory_domains": [ 00:08:38.432 { 00:08:38.432 "dma_device_id": "system", 00:08:38.432 "dma_device_type": 1 00:08:38.432 }, 00:08:38.432 { 00:08:38.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.432 "dma_device_type": 2 00:08:38.432 } 00:08:38.432 ], 00:08:38.432 "driver_specific": {} 00:08:38.432 } 00:08:38.432 ] 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 [2024-11-16 18:49:21.831896] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.432 [2024-11-16 18:49:21.831979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.432 [2024-11-16 18:49:21.832023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.432 [2024-11-16 18:49:21.833893] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.432 "name": "Existed_Raid", 00:08:38.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.432 "strip_size_kb": 64, 00:08:38.432 "state": "configuring", 00:08:38.432 "raid_level": "concat", 00:08:38.432 "superblock": false, 00:08:38.432 "num_base_bdevs": 3, 00:08:38.432 "num_base_bdevs_discovered": 2, 00:08:38.432 "num_base_bdevs_operational": 3, 00:08:38.432 "base_bdevs_list": [ 00:08:38.432 { 00:08:38.432 "name": "BaseBdev1", 00:08:38.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.432 "is_configured": false, 00:08:38.432 "data_offset": 0, 00:08:38.432 "data_size": 0 00:08:38.432 }, 00:08:38.432 { 00:08:38.432 "name": "BaseBdev2", 00:08:38.432 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb", 00:08:38.432 "is_configured": true, 00:08:38.432 "data_offset": 0, 00:08:38.432 "data_size": 65536 00:08:38.432 }, 00:08:38.432 { 00:08:38.432 "name": "BaseBdev3", 00:08:38.432 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b", 00:08:38.432 "is_configured": true, 00:08:38.432 "data_offset": 0, 00:08:38.432 "data_size": 65536 00:08:38.432 } 00:08:38.432 ] 00:08:38.432 }' 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.432 18:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.088 [2024-11-16 18:49:22.255161] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.088 "name": "Existed_Raid", 00:08:39.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.088 "strip_size_kb": 64, 00:08:39.088 "state": "configuring", 00:08:39.088 "raid_level": "concat", 00:08:39.088 "superblock": false, 
00:08:39.088 "num_base_bdevs": 3, 00:08:39.088 "num_base_bdevs_discovered": 1, 00:08:39.088 "num_base_bdevs_operational": 3, 00:08:39.088 "base_bdevs_list": [ 00:08:39.088 { 00:08:39.088 "name": "BaseBdev1", 00:08:39.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.088 "is_configured": false, 00:08:39.088 "data_offset": 0, 00:08:39.088 "data_size": 0 00:08:39.088 }, 00:08:39.088 { 00:08:39.088 "name": null, 00:08:39.088 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb", 00:08:39.088 "is_configured": false, 00:08:39.088 "data_offset": 0, 00:08:39.088 "data_size": 65536 00:08:39.088 }, 00:08:39.088 { 00:08:39.088 "name": "BaseBdev3", 00:08:39.088 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b", 00:08:39.088 "is_configured": true, 00:08:39.088 "data_offset": 0, 00:08:39.088 "data_size": 65536 00:08:39.088 } 00:08:39.088 ] 00:08:39.088 }' 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.088 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.347 
18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.347 [2024-11-16 18:49:22.723151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:39.347 BaseBdev1 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.347 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.347 [ 00:08:39.347 { 00:08:39.347 "name": "BaseBdev1", 00:08:39.347 "aliases": [ 00:08:39.347 "ff14679a-2424-40e6-98a2-ae59b6558f5a" 00:08:39.347 ], 00:08:39.347 "product_name": 
"Malloc disk", 00:08:39.347 "block_size": 512, 00:08:39.347 "num_blocks": 65536, 00:08:39.347 "uuid": "ff14679a-2424-40e6-98a2-ae59b6558f5a", 00:08:39.347 "assigned_rate_limits": { 00:08:39.347 "rw_ios_per_sec": 0, 00:08:39.347 "rw_mbytes_per_sec": 0, 00:08:39.347 "r_mbytes_per_sec": 0, 00:08:39.347 "w_mbytes_per_sec": 0 00:08:39.347 }, 00:08:39.347 "claimed": true, 00:08:39.347 "claim_type": "exclusive_write", 00:08:39.347 "zoned": false, 00:08:39.347 "supported_io_types": { 00:08:39.348 "read": true, 00:08:39.348 "write": true, 00:08:39.348 "unmap": true, 00:08:39.348 "flush": true, 00:08:39.348 "reset": true, 00:08:39.348 "nvme_admin": false, 00:08:39.348 "nvme_io": false, 00:08:39.348 "nvme_io_md": false, 00:08:39.348 "write_zeroes": true, 00:08:39.348 "zcopy": true, 00:08:39.348 "get_zone_info": false, 00:08:39.348 "zone_management": false, 00:08:39.348 "zone_append": false, 00:08:39.348 "compare": false, 00:08:39.348 "compare_and_write": false, 00:08:39.348 "abort": true, 00:08:39.348 "seek_hole": false, 00:08:39.348 "seek_data": false, 00:08:39.348 "copy": true, 00:08:39.348 "nvme_iov_md": false 00:08:39.348 }, 00:08:39.348 "memory_domains": [ 00:08:39.348 { 00:08:39.348 "dma_device_id": "system", 00:08:39.348 "dma_device_type": 1 00:08:39.348 }, 00:08:39.348 { 00:08:39.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.348 "dma_device_type": 2 00:08:39.348 } 00:08:39.348 ], 00:08:39.348 "driver_specific": {} 00:08:39.348 } 00:08:39.348 ] 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.348 18:49:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.348 "name": "Existed_Raid", 00:08:39.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.348 "strip_size_kb": 64, 00:08:39.348 "state": "configuring", 00:08:39.348 "raid_level": "concat", 00:08:39.348 "superblock": false, 00:08:39.348 "num_base_bdevs": 3, 00:08:39.348 "num_base_bdevs_discovered": 2, 00:08:39.348 "num_base_bdevs_operational": 3, 00:08:39.348 "base_bdevs_list": [ 00:08:39.348 { 00:08:39.348 "name": "BaseBdev1", 
00:08:39.348 "uuid": "ff14679a-2424-40e6-98a2-ae59b6558f5a", 00:08:39.348 "is_configured": true, 00:08:39.348 "data_offset": 0, 00:08:39.348 "data_size": 65536 00:08:39.348 }, 00:08:39.348 { 00:08:39.348 "name": null, 00:08:39.348 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb", 00:08:39.348 "is_configured": false, 00:08:39.348 "data_offset": 0, 00:08:39.348 "data_size": 65536 00:08:39.348 }, 00:08:39.348 { 00:08:39.348 "name": "BaseBdev3", 00:08:39.348 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b", 00:08:39.348 "is_configured": true, 00:08:39.348 "data_offset": 0, 00:08:39.348 "data_size": 65536 00:08:39.348 } 00:08:39.348 ] 00:08:39.348 }' 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.348 18:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.917 [2024-11-16 18:49:23.234327] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:39.917 
18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.917 "name": "Existed_Raid", 00:08:39.917 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:39.917 "strip_size_kb": 64, 00:08:39.917 "state": "configuring", 00:08:39.917 "raid_level": "concat", 00:08:39.917 "superblock": false, 00:08:39.917 "num_base_bdevs": 3, 00:08:39.917 "num_base_bdevs_discovered": 1, 00:08:39.917 "num_base_bdevs_operational": 3, 00:08:39.917 "base_bdevs_list": [ 00:08:39.917 { 00:08:39.917 "name": "BaseBdev1", 00:08:39.917 "uuid": "ff14679a-2424-40e6-98a2-ae59b6558f5a", 00:08:39.917 "is_configured": true, 00:08:39.917 "data_offset": 0, 00:08:39.917 "data_size": 65536 00:08:39.917 }, 00:08:39.917 { 00:08:39.917 "name": null, 00:08:39.917 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb", 00:08:39.917 "is_configured": false, 00:08:39.917 "data_offset": 0, 00:08:39.917 "data_size": 65536 00:08:39.917 }, 00:08:39.917 { 00:08:39.917 "name": null, 00:08:39.917 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b", 00:08:39.917 "is_configured": false, 00:08:39.917 "data_offset": 0, 00:08:39.917 "data_size": 65536 00:08:39.917 } 00:08:39.917 ] 00:08:39.917 }' 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.917 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.486 [2024-11-16 18:49:23.717552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.486 "name": "Existed_Raid", 00:08:40.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.486 "strip_size_kb": 64, 00:08:40.486 "state": "configuring", 00:08:40.486 "raid_level": "concat", 00:08:40.486 "superblock": false, 00:08:40.486 "num_base_bdevs": 3, 00:08:40.486 "num_base_bdevs_discovered": 2, 00:08:40.486 "num_base_bdevs_operational": 3, 00:08:40.486 "base_bdevs_list": [ 00:08:40.486 { 00:08:40.486 "name": "BaseBdev1", 00:08:40.486 "uuid": "ff14679a-2424-40e6-98a2-ae59b6558f5a", 00:08:40.486 "is_configured": true, 00:08:40.486 "data_offset": 0, 00:08:40.486 "data_size": 65536 00:08:40.486 }, 00:08:40.486 { 00:08:40.486 "name": null, 00:08:40.486 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb", 00:08:40.486 "is_configured": false, 00:08:40.486 "data_offset": 0, 00:08:40.486 "data_size": 65536 00:08:40.486 }, 00:08:40.486 { 00:08:40.486 "name": "BaseBdev3", 00:08:40.486 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b", 00:08:40.486 "is_configured": true, 00:08:40.486 "data_offset": 0, 00:08:40.486 "data_size": 65536 00:08:40.486 } 00:08:40.486 ] 00:08:40.486 }' 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.486 18:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.746 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.746 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.746 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:40.746 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:40.746 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.746 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:40.746 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:40.746 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.746 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.746 [2024-11-16 18:49:24.168799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.005 18:49:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.005 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.005 "name": "Existed_Raid", 00:08:41.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.005 "strip_size_kb": 64, 00:08:41.005 "state": "configuring", 00:08:41.005 "raid_level": "concat", 00:08:41.005 "superblock": false, 00:08:41.005 "num_base_bdevs": 3, 00:08:41.005 "num_base_bdevs_discovered": 1, 00:08:41.005 "num_base_bdevs_operational": 3, 00:08:41.005 "base_bdevs_list": [ 00:08:41.005 { 00:08:41.005 "name": null, 00:08:41.005 "uuid": "ff14679a-2424-40e6-98a2-ae59b6558f5a", 00:08:41.005 "is_configured": false, 00:08:41.005 "data_offset": 0, 00:08:41.005 "data_size": 65536 00:08:41.005 }, 00:08:41.005 { 00:08:41.005 "name": null, 00:08:41.005 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb", 00:08:41.005 "is_configured": false, 00:08:41.005 "data_offset": 0, 00:08:41.005 "data_size": 65536 00:08:41.005 }, 00:08:41.005 { 00:08:41.005 "name": "BaseBdev3", 00:08:41.005 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b", 00:08:41.005 "is_configured": true, 00:08:41.005 "data_offset": 0, 00:08:41.005 "data_size": 65536 00:08:41.005 } 00:08:41.005 ] 00:08:41.005 }' 00:08:41.006 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.006 18:49:24 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.265 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.265 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.265 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.265 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:41.265 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.524 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:41.524 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:41.524 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.524 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.524 [2024-11-16 18:49:24.741849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:41.524 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.524 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:41.524 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:41.525 18:49:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.525 "name": "Existed_Raid", 00:08:41.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.525 "strip_size_kb": 64, 00:08:41.525 "state": "configuring", 00:08:41.525 "raid_level": "concat", 00:08:41.525 "superblock": false, 00:08:41.525 "num_base_bdevs": 3, 00:08:41.525 "num_base_bdevs_discovered": 2, 00:08:41.525 "num_base_bdevs_operational": 3, 00:08:41.525 "base_bdevs_list": [ 00:08:41.525 { 00:08:41.525 "name": null, 00:08:41.525 "uuid": "ff14679a-2424-40e6-98a2-ae59b6558f5a", 00:08:41.525 "is_configured": false, 00:08:41.525 "data_offset": 0, 00:08:41.525 "data_size": 65536 00:08:41.525 }, 00:08:41.525 { 00:08:41.525 "name": "BaseBdev2", 00:08:41.525 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb", 00:08:41.525 "is_configured": true, 00:08:41.525 "data_offset": 
0, 00:08:41.525 "data_size": 65536 00:08:41.525 }, 00:08:41.525 { 00:08:41.525 "name": "BaseBdev3", 00:08:41.525 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b", 00:08:41.525 "is_configured": true, 00:08:41.525 "data_offset": 0, 00:08:41.525 "data_size": 65536 00:08:41.525 } 00:08:41.525 ] 00:08:41.525 }' 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.525 18:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.784 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:41.784 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.784 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:41.784 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.784 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ff14679a-2424-40e6-98a2-ae59b6558f5a 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.043 [2024-11-16 18:49:25.329275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:42.043 [2024-11-16 18:49:25.329317] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:42.043 [2024-11-16 18:49:25.329325] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:42.043 [2024-11-16 18:49:25.329564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:42.043 [2024-11-16 18:49:25.329753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:42.043 [2024-11-16 18:49:25.329764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:42.043 [2024-11-16 18:49:25.330021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.043 NewBaseBdev 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:42.043 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.044 
18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.044 [ 00:08:42.044 { 00:08:42.044 "name": "NewBaseBdev", 00:08:42.044 "aliases": [ 00:08:42.044 "ff14679a-2424-40e6-98a2-ae59b6558f5a" 00:08:42.044 ], 00:08:42.044 "product_name": "Malloc disk", 00:08:42.044 "block_size": 512, 00:08:42.044 "num_blocks": 65536, 00:08:42.044 "uuid": "ff14679a-2424-40e6-98a2-ae59b6558f5a", 00:08:42.044 "assigned_rate_limits": { 00:08:42.044 "rw_ios_per_sec": 0, 00:08:42.044 "rw_mbytes_per_sec": 0, 00:08:42.044 "r_mbytes_per_sec": 0, 00:08:42.044 "w_mbytes_per_sec": 0 00:08:42.044 }, 00:08:42.044 "claimed": true, 00:08:42.044 "claim_type": "exclusive_write", 00:08:42.044 "zoned": false, 00:08:42.044 "supported_io_types": { 00:08:42.044 "read": true, 00:08:42.044 "write": true, 00:08:42.044 "unmap": true, 00:08:42.044 "flush": true, 00:08:42.044 "reset": true, 00:08:42.044 "nvme_admin": false, 00:08:42.044 "nvme_io": false, 00:08:42.044 "nvme_io_md": false, 00:08:42.044 "write_zeroes": true, 00:08:42.044 "zcopy": true, 00:08:42.044 "get_zone_info": false, 00:08:42.044 "zone_management": false, 00:08:42.044 "zone_append": false, 00:08:42.044 "compare": false, 00:08:42.044 "compare_and_write": false, 00:08:42.044 "abort": true, 00:08:42.044 "seek_hole": false, 00:08:42.044 "seek_data": false, 00:08:42.044 "copy": true, 00:08:42.044 "nvme_iov_md": false 00:08:42.044 }, 00:08:42.044 
"memory_domains": [ 00:08:42.044 { 00:08:42.044 "dma_device_id": "system", 00:08:42.044 "dma_device_type": 1 00:08:42.044 }, 00:08:42.044 { 00:08:42.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.044 "dma_device_type": 2 00:08:42.044 } 00:08:42.044 ], 00:08:42.044 "driver_specific": {} 00:08:42.044 } 00:08:42.044 ] 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.044 "name": "Existed_Raid", 00:08:42.044 "uuid": "045ba878-2e76-4d75-b830-a40189a4c4b0", 00:08:42.044 "strip_size_kb": 64, 00:08:42.044 "state": "online", 00:08:42.044 "raid_level": "concat", 00:08:42.044 "superblock": false, 00:08:42.044 "num_base_bdevs": 3, 00:08:42.044 "num_base_bdevs_discovered": 3, 00:08:42.044 "num_base_bdevs_operational": 3, 00:08:42.044 "base_bdevs_list": [ 00:08:42.044 { 00:08:42.044 "name": "NewBaseBdev", 00:08:42.044 "uuid": "ff14679a-2424-40e6-98a2-ae59b6558f5a", 00:08:42.044 "is_configured": true, 00:08:42.044 "data_offset": 0, 00:08:42.044 "data_size": 65536 00:08:42.044 }, 00:08:42.044 { 00:08:42.044 "name": "BaseBdev2", 00:08:42.044 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb", 00:08:42.044 "is_configured": true, 00:08:42.044 "data_offset": 0, 00:08:42.044 "data_size": 65536 00:08:42.044 }, 00:08:42.044 { 00:08:42.044 "name": "BaseBdev3", 00:08:42.044 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b", 00:08:42.044 "is_configured": true, 00:08:42.044 "data_offset": 0, 00:08:42.044 "data_size": 65536 00:08:42.044 } 00:08:42.044 ] 00:08:42.044 }' 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.044 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.303 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.303 [2024-11-16 18:49:25.768873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.562 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.562 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.562 "name": "Existed_Raid", 00:08:42.562 "aliases": [ 00:08:42.562 "045ba878-2e76-4d75-b830-a40189a4c4b0" 00:08:42.562 ], 00:08:42.562 "product_name": "Raid Volume", 00:08:42.562 "block_size": 512, 00:08:42.562 "num_blocks": 196608, 00:08:42.562 "uuid": "045ba878-2e76-4d75-b830-a40189a4c4b0", 00:08:42.562 "assigned_rate_limits": { 00:08:42.562 "rw_ios_per_sec": 0, 00:08:42.562 "rw_mbytes_per_sec": 0, 00:08:42.562 "r_mbytes_per_sec": 0, 00:08:42.562 "w_mbytes_per_sec": 0 00:08:42.562 }, 00:08:42.562 "claimed": false, 00:08:42.562 "zoned": false, 00:08:42.562 "supported_io_types": { 00:08:42.562 "read": true, 00:08:42.562 "write": true, 00:08:42.562 "unmap": true, 00:08:42.562 "flush": true, 00:08:42.562 "reset": true, 00:08:42.562 "nvme_admin": false, 00:08:42.562 "nvme_io": false, 00:08:42.563 "nvme_io_md": false, 00:08:42.563 "write_zeroes": true, 
00:08:42.563 "zcopy": false,
00:08:42.563 "get_zone_info": false,
00:08:42.563 "zone_management": false,
00:08:42.563 "zone_append": false,
00:08:42.563 "compare": false,
00:08:42.563 "compare_and_write": false,
00:08:42.563 "abort": false,
00:08:42.563 "seek_hole": false,
00:08:42.563 "seek_data": false,
00:08:42.563 "copy": false,
00:08:42.563 "nvme_iov_md": false
00:08:42.563 },
00:08:42.563 "memory_domains": [
00:08:42.563 {
00:08:42.563 "dma_device_id": "system",
00:08:42.563 "dma_device_type": 1
00:08:42.563 },
00:08:42.563 {
00:08:42.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.563 "dma_device_type": 2
00:08:42.563 },
00:08:42.563 {
00:08:42.563 "dma_device_id": "system",
00:08:42.563 "dma_device_type": 1
00:08:42.563 },
00:08:42.563 {
00:08:42.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.563 "dma_device_type": 2
00:08:42.563 },
00:08:42.563 {
00:08:42.563 "dma_device_id": "system",
00:08:42.563 "dma_device_type": 1
00:08:42.563 },
00:08:42.563 {
00:08:42.563 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:42.563 "dma_device_type": 2
00:08:42.563 }
00:08:42.563 ],
00:08:42.563 "driver_specific": {
00:08:42.563 "raid": {
00:08:42.563 "uuid": "045ba878-2e76-4d75-b830-a40189a4c4b0",
00:08:42.563 "strip_size_kb": 64,
00:08:42.563 "state": "online",
00:08:42.563 "raid_level": "concat",
00:08:42.563 "superblock": false,
00:08:42.563 "num_base_bdevs": 3,
00:08:42.563 "num_base_bdevs_discovered": 3,
00:08:42.563 "num_base_bdevs_operational": 3,
00:08:42.563 "base_bdevs_list": [
00:08:42.563 {
00:08:42.563 "name": "NewBaseBdev",
00:08:42.563 "uuid": "ff14679a-2424-40e6-98a2-ae59b6558f5a",
00:08:42.563 "is_configured": true,
00:08:42.563 "data_offset": 0,
00:08:42.563 "data_size": 65536
00:08:42.563 },
00:08:42.563 {
00:08:42.563 "name": "BaseBdev2",
00:08:42.563 "uuid": "ef0afb83-7053-452b-a0e4-c231b357e5bb",
00:08:42.563 "is_configured": true,
00:08:42.563 "data_offset": 0,
00:08:42.563 "data_size": 65536
00:08:42.563 },
00:08:42.563 {
00:08:42.563 "name": "BaseBdev3",
00:08:42.563 "uuid": "0c95d683-ae23-436e-814e-0775b979be8b",
00:08:42.563 "is_configured": true,
00:08:42.563 "data_offset": 0,
00:08:42.563 "data_size": 65536
00:08:42.563 }
00:08:42.563 ]
00:08:42.563 }
00:08:42.563 }
00:08:42.563 }'
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev
00:08:42.563 BaseBdev2
00:08:42.563 BaseBdev3'
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.563 18:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:42.563 [2024-11-16 18:49:26.008139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:42.563 [2024-11-16 18:49:26.008165] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:08:42.563 [2024-11-16 18:49:26.008231] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:42.563 [2024-11-16 18:49:26.008283] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:42.563 [2024-11-16 18:49:26.008294] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65431
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65431 ']'
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65431
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:42.563 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65431
00:08:42.822 killing process with pid 65431 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:42.822 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:42.822 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65431'
00:08:42.822 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 65431
00:08:42.822 [2024-11-16 18:49:26.056166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:42.822 18:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65431
00:08:43.081 [2024-11-16 18:49:26.352311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:44.019 18:49:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0
00:08:44.019
00:08:44.019 real 0m10.268s
00:08:44.019 user 0m16.380s
00:08:44.019 sys 0m1.709s
00:08:44.019 18:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:44.019 18:49:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:08:44.019 ************************************
00:08:44.019 END TEST raid_state_function_test
00:08:44.019 ************************************
00:08:44.019 18:49:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true
00:08:44.019 18:49:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:44.019 18:49:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:44.019 18:49:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:44.019 ************************************
00:08:44.019 START TEST raid_state_function_test_sb
00:08:44.019 ************************************
00:08:44.019 18:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true
00:08:44.019 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat
00:08:44.019 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:08:44.019 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:08:44.019 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:08:44.278 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']'
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66052
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66052'
00:08:44.279 Process raid pid: 66052 18:49:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66052
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66052 ']'
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:44.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:44.279 18:49:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:44.279 [2024-11-16 18:49:27.584777] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:08:44.279 [2024-11-16 18:49:27.584893] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:44.538 [2024-11-16 18:49:27.753343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:44.538 [2024-11-16 18:49:27.868311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:44.797 [2024-11-16 18:49:28.070577] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:44.797 [2024-11-16 18:49:28.070614] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.075 [2024-11-16 18:49:28.417202] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:45.075 [2024-11-16 18:49:28.417256] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:45.075 [2024-11-16 18:49:28.417267] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:45.075 [2024-11-16 18:49:28.417276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:45.075 [2024-11-16 18:49:28.417282] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:45.075 [2024-11-16 18:49:28.417291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:45.075 "name": "Existed_Raid",
00:08:45.075 "uuid": "0276d0ef-3bff-4c22-8cc4-fd4e4e52d234",
00:08:45.075 "strip_size_kb": 64,
00:08:45.075 "state": "configuring",
00:08:45.075 "raid_level": "concat",
00:08:45.075 "superblock": true,
00:08:45.075 "num_base_bdevs": 3,
00:08:45.075 "num_base_bdevs_discovered": 0,
00:08:45.075 "num_base_bdevs_operational": 3,
00:08:45.075 "base_bdevs_list": [
00:08:45.075 {
00:08:45.075 "name": "BaseBdev1",
00:08:45.075 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.075 "is_configured": false,
00:08:45.075 "data_offset": 0,
00:08:45.075 "data_size": 0
00:08:45.075 },
00:08:45.075 {
00:08:45.075 "name": "BaseBdev2",
00:08:45.075 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.075 "is_configured": false,
00:08:45.075 "data_offset": 0,
00:08:45.075 "data_size": 0
00:08:45.075 },
00:08:45.075 {
00:08:45.075 "name": "BaseBdev3",
00:08:45.075 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.075 "is_configured": false,
00:08:45.075 "data_offset": 0,
00:08:45.075 "data_size": 0
00:08:45.075 }
00:08:45.075 ]
00:08:45.075 }'
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:45.075 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.380 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:45.380 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.380 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.380 [2024-11-16 18:49:28.816481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:45.380 [2024-11-16 18:49:28.816567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:08:45.380 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.380 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:45.380 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.381 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.381 [2024-11-16 18:49:28.824465] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:08:45.381 [2024-11-16 18:49:28.824549] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:08:45.381 [2024-11-16 18:49:28.824577] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:45.381 [2024-11-16 18:49:28.824600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:45.381 [2024-11-16 18:49:28.824619] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:45.381 [2024-11-16 18:49:28.824640] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:45.381 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.381 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:08:45.381 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.381 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.640 [2024-11-16 18:49:28.867627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:45.640 BaseBdev1 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.640 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:08:45.640 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:08:45.640 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:08:45.640 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:08:45.640 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:08:45.640 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:08:45.640 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.641 [
00:08:45.641 {
00:08:45.641 "name": "BaseBdev1",
00:08:45.641 "aliases": [
00:08:45.641 "1fdb78ed-24e2-46c1-8f04-f7b068dd73ba"
00:08:45.641 ],
00:08:45.641 "product_name": "Malloc disk",
00:08:45.641 "block_size": 512,
00:08:45.641 "num_blocks": 65536,
00:08:45.641 "uuid": "1fdb78ed-24e2-46c1-8f04-f7b068dd73ba",
00:08:45.641 "assigned_rate_limits": {
00:08:45.641 "rw_ios_per_sec": 0,
00:08:45.641 "rw_mbytes_per_sec": 0,
00:08:45.641 "r_mbytes_per_sec": 0,
00:08:45.641 "w_mbytes_per_sec": 0
00:08:45.641 },
00:08:45.641 "claimed": true,
00:08:45.641 "claim_type": "exclusive_write",
00:08:45.641 "zoned": false,
00:08:45.641 "supported_io_types": {
00:08:45.641 "read": true,
00:08:45.641 "write": true,
00:08:45.641 "unmap": true,
00:08:45.641 "flush": true,
00:08:45.641 "reset": true,
00:08:45.641 "nvme_admin": false,
00:08:45.641 "nvme_io": false,
00:08:45.641 "nvme_io_md": false,
00:08:45.641 "write_zeroes": true,
00:08:45.641 "zcopy": true,
00:08:45.641 "get_zone_info": false,
00:08:45.641 "zone_management": false,
00:08:45.641 "zone_append": false,
00:08:45.641 "compare": false,
00:08:45.641 "compare_and_write": false,
00:08:45.641 "abort": true,
00:08:45.641 "seek_hole": false,
00:08:45.641 "seek_data": false,
00:08:45.641 "copy": true,
00:08:45.641 "nvme_iov_md": false
00:08:45.641 },
00:08:45.641 "memory_domains": [
00:08:45.641 {
00:08:45.641 "dma_device_id": "system",
00:08:45.641 "dma_device_type": 1
00:08:45.641 },
00:08:45.641 {
00:08:45.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:45.641 "dma_device_type": 2
00:08:45.641 }
00:08:45.641 ],
00:08:45.641 "driver_specific": {}
00:08:45.641 }
00:08:45.641 ]
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:45.641 "name": "Existed_Raid",
00:08:45.641 "uuid": "875d4196-32e7-42a5-9c85-3ff6b56eaa94",
00:08:45.641 "strip_size_kb": 64,
00:08:45.641 "state": "configuring",
00:08:45.641 "raid_level": "concat",
00:08:45.641 "superblock": true,
00:08:45.641 "num_base_bdevs": 3,
00:08:45.641 "num_base_bdevs_discovered": 1,
00:08:45.641 "num_base_bdevs_operational": 3,
00:08:45.641 "base_bdevs_list": [
00:08:45.641 {
00:08:45.641 "name": "BaseBdev1",
00:08:45.641 "uuid": "1fdb78ed-24e2-46c1-8f04-f7b068dd73ba",
00:08:45.641 "is_configured": true,
00:08:45.641 "data_offset": 2048,
00:08:45.641 "data_size": 63488
00:08:45.641 },
00:08:45.641 {
00:08:45.641 "name": "BaseBdev2",
00:08:45.641 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.641 "is_configured": false,
00:08:45.641 "data_offset": 0,
00:08:45.641 "data_size": 0
00:08:45.641 },
00:08:45.641 {
00:08:45.641 "name": "BaseBdev3",
00:08:45.641 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.641 "is_configured": false,
00:08:45.641 "data_offset": 0,
00:08:45.641 "data_size": 0
00:08:45.641 }
00:08:45.641 ]
00:08:45.641 }'
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:08:45.641 18:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.904 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:08:45.904 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.904 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.904 [2024-11-16 18:49:29.306914] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:08:45.904 [2024-11-16 18:49:29.307040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:08:45.904 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.904 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.905 [2024-11-16 18:49:29.314945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:08:45.905 [2024-11-16 18:49:29.316900] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:08:45.905 [2024-11-16 18:49:29.316978] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:08:45.905 [2024-11-16 18:49:29.316993] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:08:45.905 [2024-11-16 18:49:29.317003] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:45.905 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:08:45.905 "name": "Existed_Raid",
00:08:45.905 "uuid": "fb413640-b58c-40f3-8bd6-44301d45ab72",
00:08:45.905 "strip_size_kb": 64,
00:08:45.905 "state": "configuring",
00:08:45.905 "raid_level": "concat",
00:08:45.905 "superblock": true,
00:08:45.905 "num_base_bdevs": 3,
00:08:45.905 "num_base_bdevs_discovered": 1,
00:08:45.905 "num_base_bdevs_operational": 3,
00:08:45.905 "base_bdevs_list": [
00:08:45.905 {
00:08:45.905 "name": "BaseBdev1",
00:08:45.905 "uuid": "1fdb78ed-24e2-46c1-8f04-f7b068dd73ba",
00:08:45.905 "is_configured": true,
00:08:45.905 "data_offset": 2048,
00:08:45.905 "data_size": 63488
00:08:45.905 },
00:08:45.905 {
00:08:45.905 "name": "BaseBdev2",
00:08:45.906 "uuid": "00000000-0000-0000-0000-000000000000",
00:08:45.906 "is_configured": false,
00:08:45.906 "data_offset": 0,
00:08:45.906 "data_size": 0
00:08:45.906 },
00:08:45.906 {
00:08:45.906 "name": "BaseBdev3",
00:08:45.906 "uuid":
"00000000-0000-0000-0000-000000000000", 00:08:45.906 "is_configured": false, 00:08:45.906 "data_offset": 0, 00:08:45.906 "data_size": 0 00:08:45.906 } 00:08:45.906 ] 00:08:45.906 }' 00:08:45.906 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:45.906 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 [2024-11-16 18:49:29.763014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.479 BaseBdev2 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 [ 00:08:46.479 { 00:08:46.479 "name": "BaseBdev2", 00:08:46.479 "aliases": [ 00:08:46.479 "f6df3ae6-e569-432a-ba9e-66b55cb48b69" 00:08:46.479 ], 00:08:46.479 "product_name": "Malloc disk", 00:08:46.479 "block_size": 512, 00:08:46.479 "num_blocks": 65536, 00:08:46.479 "uuid": "f6df3ae6-e569-432a-ba9e-66b55cb48b69", 00:08:46.479 "assigned_rate_limits": { 00:08:46.479 "rw_ios_per_sec": 0, 00:08:46.479 "rw_mbytes_per_sec": 0, 00:08:46.479 "r_mbytes_per_sec": 0, 00:08:46.479 "w_mbytes_per_sec": 0 00:08:46.479 }, 00:08:46.479 "claimed": true, 00:08:46.479 "claim_type": "exclusive_write", 00:08:46.479 "zoned": false, 00:08:46.479 "supported_io_types": { 00:08:46.479 "read": true, 00:08:46.479 "write": true, 00:08:46.479 "unmap": true, 00:08:46.479 "flush": true, 00:08:46.479 "reset": true, 00:08:46.479 "nvme_admin": false, 00:08:46.479 "nvme_io": false, 00:08:46.479 "nvme_io_md": false, 00:08:46.479 "write_zeroes": true, 00:08:46.479 "zcopy": true, 00:08:46.479 "get_zone_info": false, 00:08:46.479 "zone_management": false, 00:08:46.479 "zone_append": false, 00:08:46.479 "compare": false, 00:08:46.479 "compare_and_write": false, 00:08:46.479 "abort": true, 00:08:46.479 "seek_hole": false, 00:08:46.479 "seek_data": false, 00:08:46.479 "copy": true, 00:08:46.479 "nvme_iov_md": false 00:08:46.479 }, 00:08:46.479 "memory_domains": [ 00:08:46.479 { 00:08:46.479 "dma_device_id": "system", 00:08:46.479 "dma_device_type": 1 00:08:46.479 }, 00:08:46.479 { 00:08:46.479 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.479 "dma_device_type": 2 00:08:46.479 } 00:08:46.479 ], 00:08:46.479 "driver_specific": {} 00:08:46.479 } 00:08:46.479 ] 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.479 "name": "Existed_Raid", 00:08:46.479 "uuid": "fb413640-b58c-40f3-8bd6-44301d45ab72", 00:08:46.479 "strip_size_kb": 64, 00:08:46.479 "state": "configuring", 00:08:46.479 "raid_level": "concat", 00:08:46.479 "superblock": true, 00:08:46.479 "num_base_bdevs": 3, 00:08:46.479 "num_base_bdevs_discovered": 2, 00:08:46.479 "num_base_bdevs_operational": 3, 00:08:46.479 "base_bdevs_list": [ 00:08:46.479 { 00:08:46.479 "name": "BaseBdev1", 00:08:46.479 "uuid": "1fdb78ed-24e2-46c1-8f04-f7b068dd73ba", 00:08:46.479 "is_configured": true, 00:08:46.479 "data_offset": 2048, 00:08:46.479 "data_size": 63488 00:08:46.479 }, 00:08:46.479 { 00:08:46.479 "name": "BaseBdev2", 00:08:46.479 "uuid": "f6df3ae6-e569-432a-ba9e-66b55cb48b69", 00:08:46.479 "is_configured": true, 00:08:46.479 "data_offset": 2048, 00:08:46.479 "data_size": 63488 00:08:46.479 }, 00:08:46.479 { 00:08:46.479 "name": "BaseBdev3", 00:08:46.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.479 "is_configured": false, 00:08:46.479 "data_offset": 0, 00:08:46.479 "data_size": 0 00:08:46.479 } 00:08:46.479 ] 00:08:46.479 }' 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.479 18:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:47.049 18:49:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.049 [2024-11-16 18:49:30.271119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:47.049 [2024-11-16 18:49:30.271483] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.049 [2024-11-16 18:49:30.271555] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:47.049 [2024-11-16 18:49:30.271868] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:47.049 BaseBdev3 00:08:47.049 [2024-11-16 18:49:30.272081] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.049 [2024-11-16 18:49:30.272123] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:47.049 [2024-11-16 18:49:30.272313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:47.049 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.050 [ 00:08:47.050 { 00:08:47.050 "name": "BaseBdev3", 00:08:47.050 "aliases": [ 00:08:47.050 "b658c279-2ac9-45fa-90a1-7f2f93c065af" 00:08:47.050 ], 00:08:47.050 "product_name": "Malloc disk", 00:08:47.050 "block_size": 512, 00:08:47.050 "num_blocks": 65536, 00:08:47.050 "uuid": "b658c279-2ac9-45fa-90a1-7f2f93c065af", 00:08:47.050 "assigned_rate_limits": { 00:08:47.050 "rw_ios_per_sec": 0, 00:08:47.050 "rw_mbytes_per_sec": 0, 00:08:47.050 "r_mbytes_per_sec": 0, 00:08:47.050 "w_mbytes_per_sec": 0 00:08:47.050 }, 00:08:47.050 "claimed": true, 00:08:47.050 "claim_type": "exclusive_write", 00:08:47.050 "zoned": false, 00:08:47.050 "supported_io_types": { 00:08:47.050 "read": true, 00:08:47.050 "write": true, 00:08:47.050 "unmap": true, 00:08:47.050 "flush": true, 00:08:47.050 "reset": true, 00:08:47.050 "nvme_admin": false, 00:08:47.050 "nvme_io": false, 00:08:47.050 "nvme_io_md": false, 00:08:47.050 "write_zeroes": true, 00:08:47.050 "zcopy": true, 00:08:47.050 "get_zone_info": false, 00:08:47.050 "zone_management": false, 00:08:47.050 "zone_append": false, 00:08:47.050 "compare": false, 00:08:47.050 "compare_and_write": false, 00:08:47.050 "abort": true, 00:08:47.050 "seek_hole": false, 00:08:47.050 "seek_data": false, 
00:08:47.050 "copy": true, 00:08:47.050 "nvme_iov_md": false 00:08:47.050 }, 00:08:47.050 "memory_domains": [ 00:08:47.050 { 00:08:47.050 "dma_device_id": "system", 00:08:47.050 "dma_device_type": 1 00:08:47.050 }, 00:08:47.050 { 00:08:47.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.050 "dma_device_type": 2 00:08:47.050 } 00:08:47.050 ], 00:08:47.050 "driver_specific": {} 00:08:47.050 } 00:08:47.050 ] 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.050 "name": "Existed_Raid", 00:08:47.050 "uuid": "fb413640-b58c-40f3-8bd6-44301d45ab72", 00:08:47.050 "strip_size_kb": 64, 00:08:47.050 "state": "online", 00:08:47.050 "raid_level": "concat", 00:08:47.050 "superblock": true, 00:08:47.050 "num_base_bdevs": 3, 00:08:47.050 "num_base_bdevs_discovered": 3, 00:08:47.050 "num_base_bdevs_operational": 3, 00:08:47.050 "base_bdevs_list": [ 00:08:47.050 { 00:08:47.050 "name": "BaseBdev1", 00:08:47.050 "uuid": "1fdb78ed-24e2-46c1-8f04-f7b068dd73ba", 00:08:47.050 "is_configured": true, 00:08:47.050 "data_offset": 2048, 00:08:47.050 "data_size": 63488 00:08:47.050 }, 00:08:47.050 { 00:08:47.050 "name": "BaseBdev2", 00:08:47.050 "uuid": "f6df3ae6-e569-432a-ba9e-66b55cb48b69", 00:08:47.050 "is_configured": true, 00:08:47.050 "data_offset": 2048, 00:08:47.050 "data_size": 63488 00:08:47.050 }, 00:08:47.050 { 00:08:47.050 "name": "BaseBdev3", 00:08:47.050 "uuid": "b658c279-2ac9-45fa-90a1-7f2f93c065af", 00:08:47.050 "is_configured": true, 00:08:47.050 "data_offset": 2048, 00:08:47.050 "data_size": 63488 00:08:47.050 } 00:08:47.050 ] 00:08:47.050 }' 00:08:47.050 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.050 18:49:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:47.310 [2024-11-16 18:49:30.738657] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:47.310 "name": "Existed_Raid", 00:08:47.310 "aliases": [ 00:08:47.310 "fb413640-b58c-40f3-8bd6-44301d45ab72" 00:08:47.310 ], 00:08:47.310 "product_name": "Raid Volume", 00:08:47.310 "block_size": 512, 00:08:47.310 "num_blocks": 190464, 00:08:47.310 "uuid": "fb413640-b58c-40f3-8bd6-44301d45ab72", 00:08:47.310 "assigned_rate_limits": { 00:08:47.310 "rw_ios_per_sec": 0, 00:08:47.310 "rw_mbytes_per_sec": 0, 00:08:47.310 
"r_mbytes_per_sec": 0, 00:08:47.310 "w_mbytes_per_sec": 0 00:08:47.310 }, 00:08:47.310 "claimed": false, 00:08:47.310 "zoned": false, 00:08:47.310 "supported_io_types": { 00:08:47.310 "read": true, 00:08:47.310 "write": true, 00:08:47.310 "unmap": true, 00:08:47.310 "flush": true, 00:08:47.310 "reset": true, 00:08:47.310 "nvme_admin": false, 00:08:47.310 "nvme_io": false, 00:08:47.310 "nvme_io_md": false, 00:08:47.310 "write_zeroes": true, 00:08:47.310 "zcopy": false, 00:08:47.310 "get_zone_info": false, 00:08:47.310 "zone_management": false, 00:08:47.310 "zone_append": false, 00:08:47.310 "compare": false, 00:08:47.310 "compare_and_write": false, 00:08:47.310 "abort": false, 00:08:47.310 "seek_hole": false, 00:08:47.310 "seek_data": false, 00:08:47.310 "copy": false, 00:08:47.310 "nvme_iov_md": false 00:08:47.310 }, 00:08:47.310 "memory_domains": [ 00:08:47.310 { 00:08:47.310 "dma_device_id": "system", 00:08:47.310 "dma_device_type": 1 00:08:47.310 }, 00:08:47.310 { 00:08:47.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.310 "dma_device_type": 2 00:08:47.310 }, 00:08:47.310 { 00:08:47.310 "dma_device_id": "system", 00:08:47.310 "dma_device_type": 1 00:08:47.310 }, 00:08:47.310 { 00:08:47.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.310 "dma_device_type": 2 00:08:47.310 }, 00:08:47.310 { 00:08:47.310 "dma_device_id": "system", 00:08:47.310 "dma_device_type": 1 00:08:47.310 }, 00:08:47.310 { 00:08:47.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.310 "dma_device_type": 2 00:08:47.310 } 00:08:47.310 ], 00:08:47.310 "driver_specific": { 00:08:47.310 "raid": { 00:08:47.310 "uuid": "fb413640-b58c-40f3-8bd6-44301d45ab72", 00:08:47.310 "strip_size_kb": 64, 00:08:47.310 "state": "online", 00:08:47.310 "raid_level": "concat", 00:08:47.310 "superblock": true, 00:08:47.310 "num_base_bdevs": 3, 00:08:47.310 "num_base_bdevs_discovered": 3, 00:08:47.310 "num_base_bdevs_operational": 3, 00:08:47.310 "base_bdevs_list": [ 00:08:47.310 { 00:08:47.310 
"name": "BaseBdev1", 00:08:47.310 "uuid": "1fdb78ed-24e2-46c1-8f04-f7b068dd73ba", 00:08:47.310 "is_configured": true, 00:08:47.310 "data_offset": 2048, 00:08:47.310 "data_size": 63488 00:08:47.310 }, 00:08:47.310 { 00:08:47.310 "name": "BaseBdev2", 00:08:47.310 "uuid": "f6df3ae6-e569-432a-ba9e-66b55cb48b69", 00:08:47.310 "is_configured": true, 00:08:47.310 "data_offset": 2048, 00:08:47.310 "data_size": 63488 00:08:47.310 }, 00:08:47.310 { 00:08:47.310 "name": "BaseBdev3", 00:08:47.310 "uuid": "b658c279-2ac9-45fa-90a1-7f2f93c065af", 00:08:47.310 "is_configured": true, 00:08:47.310 "data_offset": 2048, 00:08:47.310 "data_size": 63488 00:08:47.310 } 00:08:47.310 ] 00:08:47.310 } 00:08:47.310 } 00:08:47.310 }' 00:08:47.310 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:47.570 BaseBdev2 00:08:47.570 BaseBdev3' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.570 18:49:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.570 18:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.570 [2024-11-16 18:49:30.942038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:47.570 [2024-11-16 18:49:30.942104] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:47.571 [2024-11-16 18:49:30.942185] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:47.571 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.571 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:47.571 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:47.571 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:47.571 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:47.571 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:47.571 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:08:47.830 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.830 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:08:47.830 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.831 "name": "Existed_Raid", 00:08:47.831 "uuid": "fb413640-b58c-40f3-8bd6-44301d45ab72", 00:08:47.831 "strip_size_kb": 64, 00:08:47.831 "state": "offline", 00:08:47.831 "raid_level": "concat", 00:08:47.831 "superblock": true, 00:08:47.831 "num_base_bdevs": 3, 00:08:47.831 "num_base_bdevs_discovered": 2, 00:08:47.831 "num_base_bdevs_operational": 2, 00:08:47.831 "base_bdevs_list": [ 00:08:47.831 { 00:08:47.831 "name": null, 00:08:47.831 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:08:47.831 "is_configured": false, 00:08:47.831 "data_offset": 0, 00:08:47.831 "data_size": 63488 00:08:47.831 }, 00:08:47.831 { 00:08:47.831 "name": "BaseBdev2", 00:08:47.831 "uuid": "f6df3ae6-e569-432a-ba9e-66b55cb48b69", 00:08:47.831 "is_configured": true, 00:08:47.831 "data_offset": 2048, 00:08:47.831 "data_size": 63488 00:08:47.831 }, 00:08:47.831 { 00:08:47.831 "name": "BaseBdev3", 00:08:47.831 "uuid": "b658c279-2ac9-45fa-90a1-7f2f93c065af", 00:08:47.831 "is_configured": true, 00:08:47.831 "data_offset": 2048, 00:08:47.831 "data_size": 63488 00:08:47.831 } 00:08:47.831 ] 00:08:47.831 }' 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.831 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.091 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.091 [2024-11-16 18:49:31.485503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.351 [2024-11-16 18:49:31.635011] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:48.351 [2024-11-16 18:49:31.635118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.351 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.612 BaseBdev2 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.612 
18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.612 [ 00:08:48.612 { 00:08:48.612 "name": "BaseBdev2", 00:08:48.612 "aliases": [ 00:08:48.612 "00280b55-7f50-44e6-b9ff-883e7f977a32" 00:08:48.612 ], 00:08:48.612 "product_name": "Malloc disk", 00:08:48.612 "block_size": 512, 00:08:48.612 "num_blocks": 65536, 00:08:48.612 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:48.612 "assigned_rate_limits": { 00:08:48.612 "rw_ios_per_sec": 0, 00:08:48.612 "rw_mbytes_per_sec": 0, 00:08:48.612 "r_mbytes_per_sec": 0, 00:08:48.612 "w_mbytes_per_sec": 0 
00:08:48.612 }, 00:08:48.612 "claimed": false, 00:08:48.612 "zoned": false, 00:08:48.612 "supported_io_types": { 00:08:48.612 "read": true, 00:08:48.612 "write": true, 00:08:48.612 "unmap": true, 00:08:48.612 "flush": true, 00:08:48.612 "reset": true, 00:08:48.612 "nvme_admin": false, 00:08:48.612 "nvme_io": false, 00:08:48.612 "nvme_io_md": false, 00:08:48.612 "write_zeroes": true, 00:08:48.612 "zcopy": true, 00:08:48.612 "get_zone_info": false, 00:08:48.612 "zone_management": false, 00:08:48.612 "zone_append": false, 00:08:48.612 "compare": false, 00:08:48.612 "compare_and_write": false, 00:08:48.612 "abort": true, 00:08:48.612 "seek_hole": false, 00:08:48.612 "seek_data": false, 00:08:48.612 "copy": true, 00:08:48.612 "nvme_iov_md": false 00:08:48.612 }, 00:08:48.612 "memory_domains": [ 00:08:48.612 { 00:08:48.612 "dma_device_id": "system", 00:08:48.612 "dma_device_type": 1 00:08:48.612 }, 00:08:48.612 { 00:08:48.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.612 "dma_device_type": 2 00:08:48.612 } 00:08:48.612 ], 00:08:48.612 "driver_specific": {} 00:08:48.612 } 00:08:48.612 ] 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.612 BaseBdev3 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.612 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.612 [ 00:08:48.612 { 00:08:48.612 "name": "BaseBdev3", 00:08:48.612 "aliases": [ 00:08:48.612 "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036" 00:08:48.612 ], 00:08:48.612 "product_name": "Malloc disk", 00:08:48.612 "block_size": 512, 00:08:48.612 "num_blocks": 65536, 00:08:48.612 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:48.612 "assigned_rate_limits": { 00:08:48.612 "rw_ios_per_sec": 0, 00:08:48.612 "rw_mbytes_per_sec": 0, 
00:08:48.612 "r_mbytes_per_sec": 0, 00:08:48.612 "w_mbytes_per_sec": 0 00:08:48.612 }, 00:08:48.612 "claimed": false, 00:08:48.612 "zoned": false, 00:08:48.612 "supported_io_types": { 00:08:48.612 "read": true, 00:08:48.612 "write": true, 00:08:48.612 "unmap": true, 00:08:48.612 "flush": true, 00:08:48.612 "reset": true, 00:08:48.612 "nvme_admin": false, 00:08:48.612 "nvme_io": false, 00:08:48.612 "nvme_io_md": false, 00:08:48.612 "write_zeroes": true, 00:08:48.612 "zcopy": true, 00:08:48.612 "get_zone_info": false, 00:08:48.612 "zone_management": false, 00:08:48.612 "zone_append": false, 00:08:48.613 "compare": false, 00:08:48.613 "compare_and_write": false, 00:08:48.613 "abort": true, 00:08:48.613 "seek_hole": false, 00:08:48.613 "seek_data": false, 00:08:48.613 "copy": true, 00:08:48.613 "nvme_iov_md": false 00:08:48.613 }, 00:08:48.613 "memory_domains": [ 00:08:48.613 { 00:08:48.613 "dma_device_id": "system", 00:08:48.613 "dma_device_type": 1 00:08:48.613 }, 00:08:48.613 { 00:08:48.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.613 "dma_device_type": 2 00:08:48.613 } 00:08:48.613 ], 00:08:48.613 "driver_specific": {} 00:08:48.613 } 00:08:48.613 ] 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:48.613 [2024-11-16 18:49:31.935843] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:48.613 [2024-11-16 18:49:31.935949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:48.613 [2024-11-16 18:49:31.935993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.613 [2024-11-16 18:49:31.937907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.613 18:49:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.613 "name": "Existed_Raid", 00:08:48.613 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:48.613 "strip_size_kb": 64, 00:08:48.613 "state": "configuring", 00:08:48.613 "raid_level": "concat", 00:08:48.613 "superblock": true, 00:08:48.613 "num_base_bdevs": 3, 00:08:48.613 "num_base_bdevs_discovered": 2, 00:08:48.613 "num_base_bdevs_operational": 3, 00:08:48.613 "base_bdevs_list": [ 00:08:48.613 { 00:08:48.613 "name": "BaseBdev1", 00:08:48.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.613 "is_configured": false, 00:08:48.613 "data_offset": 0, 00:08:48.613 "data_size": 0 00:08:48.613 }, 00:08:48.613 { 00:08:48.613 "name": "BaseBdev2", 00:08:48.613 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:48.613 "is_configured": true, 00:08:48.613 "data_offset": 2048, 00:08:48.613 "data_size": 63488 00:08:48.613 }, 00:08:48.613 { 00:08:48.613 "name": "BaseBdev3", 00:08:48.613 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:48.613 "is_configured": true, 00:08:48.613 "data_offset": 2048, 00:08:48.613 "data_size": 63488 00:08:48.613 } 00:08:48.613 ] 00:08:48.613 }' 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.613 18:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.182 [2024-11-16 18:49:32.431012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.182 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.183 "name": "Existed_Raid", 00:08:49.183 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:49.183 "strip_size_kb": 64, 00:08:49.183 "state": "configuring", 00:08:49.183 "raid_level": "concat", 00:08:49.183 "superblock": true, 00:08:49.183 "num_base_bdevs": 3, 00:08:49.183 "num_base_bdevs_discovered": 1, 00:08:49.183 "num_base_bdevs_operational": 3, 00:08:49.183 "base_bdevs_list": [ 00:08:49.183 { 00:08:49.183 "name": "BaseBdev1", 00:08:49.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.183 "is_configured": false, 00:08:49.183 "data_offset": 0, 00:08:49.183 "data_size": 0 00:08:49.183 }, 00:08:49.183 { 00:08:49.183 "name": null, 00:08:49.183 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:49.183 "is_configured": false, 00:08:49.183 "data_offset": 0, 00:08:49.183 "data_size": 63488 00:08:49.183 }, 00:08:49.183 { 00:08:49.183 "name": "BaseBdev3", 00:08:49.183 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:49.183 "is_configured": true, 00:08:49.183 "data_offset": 2048, 00:08:49.183 "data_size": 63488 00:08:49.183 } 00:08:49.183 ] 00:08:49.183 }' 00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.183 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.442 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.442 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:49.442 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:49.442 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.442 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.442 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:49.442 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:49.442 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.442 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 [2024-11-16 18:49:32.922974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:49.702 BaseBdev1 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 18:49:32 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.702 [ 00:08:49.702 { 00:08:49.702 "name": "BaseBdev1", 00:08:49.702 "aliases": [ 00:08:49.702 "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510" 00:08:49.702 ], 00:08:49.702 "product_name": "Malloc disk", 00:08:49.702 "block_size": 512, 00:08:49.702 "num_blocks": 65536, 00:08:49.702 "uuid": "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510", 00:08:49.702 "assigned_rate_limits": { 00:08:49.702 "rw_ios_per_sec": 0, 00:08:49.702 "rw_mbytes_per_sec": 0, 00:08:49.702 "r_mbytes_per_sec": 0, 00:08:49.702 "w_mbytes_per_sec": 0 00:08:49.702 }, 00:08:49.702 "claimed": true, 00:08:49.702 "claim_type": "exclusive_write", 00:08:49.702 "zoned": false, 00:08:49.702 "supported_io_types": { 00:08:49.702 "read": true, 00:08:49.702 "write": true, 00:08:49.702 "unmap": true, 00:08:49.702 "flush": true, 00:08:49.702 "reset": true, 00:08:49.702 "nvme_admin": false, 00:08:49.702 "nvme_io": false, 00:08:49.702 "nvme_io_md": false, 00:08:49.702 "write_zeroes": true, 00:08:49.702 "zcopy": true, 00:08:49.702 "get_zone_info": false, 00:08:49.702 "zone_management": false, 00:08:49.702 "zone_append": false, 00:08:49.702 "compare": false, 00:08:49.702 "compare_and_write": false, 00:08:49.702 "abort": true, 00:08:49.702 "seek_hole": false, 00:08:49.702 "seek_data": false, 00:08:49.702 "copy": true, 00:08:49.702 "nvme_iov_md": false 00:08:49.702 }, 00:08:49.702 "memory_domains": [ 00:08:49.702 { 00:08:49.702 "dma_device_id": "system", 00:08:49.702 "dma_device_type": 1 00:08:49.702 }, 00:08:49.702 { 00:08:49.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.702 
"dma_device_type": 2 00:08:49.702 } 00:08:49.702 ], 00:08:49.702 "driver_specific": {} 00:08:49.702 } 00:08:49.702 ] 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
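Throughout this trace, `verify_raid_bdev_state` checks the array by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq` and inspecting the selected record. A minimal offline sketch of that filtering follows; the payload is a hand-written stand-in mirroring the JSON in this log (no live SPDK target is assumed, and `jq` must be installed):

```shell
# Stand-in for the output of `rpc_cmd bdev_raid_get_bdevs all` seen in this trace.
payload='[{"name":"Existed_Raid","state":"configuring","raid_level":"concat","strip_size_kb":64,
  "base_bdevs_list":[
    {"name":"BaseBdev1","is_configured":true},
    {"name":null,"is_configured":false},
    {"name":"BaseBdev3","is_configured":true}]}]'

# Select the raid bdev by name, as bdev_raid.sh@113 does
raid_bdev_info=$(echo "$payload" | jq -r '.[] | select(.name == "Existed_Raid")')

# verify_raid_bdev_state then pulls individual fields out of that blob
state=$(echo "$raid_bdev_info" | jq -r '.state')

# Discovered base bdevs are the entries with is_configured == true
discovered=$(echo "$raid_bdev_info" | jq '[.base_bdevs_list[] | select(.is_configured)] | length')

echo "state=$state discovered=$discovered"
# prints: state=configuring discovered=2
```

This is the same pattern the `@295`/`@300` checks use with index paths like `.[0].base_bdevs_list[1].is_configured`; selecting by name instead of index keeps the check stable when the list order changes.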
00:08:49.702 18:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.702 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.702 "name": "Existed_Raid", 00:08:49.702 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:49.702 "strip_size_kb": 64, 00:08:49.702 "state": "configuring", 00:08:49.702 "raid_level": "concat", 00:08:49.702 "superblock": true, 00:08:49.702 "num_base_bdevs": 3, 00:08:49.702 "num_base_bdevs_discovered": 2, 00:08:49.702 "num_base_bdevs_operational": 3, 00:08:49.702 "base_bdevs_list": [ 00:08:49.702 { 00:08:49.702 "name": "BaseBdev1", 00:08:49.702 "uuid": "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510", 00:08:49.702 "is_configured": true, 00:08:49.702 "data_offset": 2048, 00:08:49.702 "data_size": 63488 00:08:49.702 }, 00:08:49.702 { 00:08:49.702 "name": null, 00:08:49.702 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:49.702 "is_configured": false, 00:08:49.702 "data_offset": 0, 00:08:49.702 "data_size": 63488 00:08:49.702 }, 00:08:49.702 { 00:08:49.702 "name": "BaseBdev3", 00:08:49.702 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:49.702 "is_configured": true, 00:08:49.702 "data_offset": 2048, 00:08:49.702 "data_size": 63488 00:08:49.702 } 00:08:49.702 ] 00:08:49.702 }' 00:08:49.702 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.702 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.963 [2024-11-16 18:49:33.414171] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.963 
18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.963 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.222 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.222 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.222 "name": "Existed_Raid", 00:08:50.222 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:50.222 "strip_size_kb": 64, 00:08:50.222 "state": "configuring", 00:08:50.222 "raid_level": "concat", 00:08:50.222 "superblock": true, 00:08:50.222 "num_base_bdevs": 3, 00:08:50.222 "num_base_bdevs_discovered": 1, 00:08:50.222 "num_base_bdevs_operational": 3, 00:08:50.222 "base_bdevs_list": [ 00:08:50.222 { 00:08:50.222 "name": "BaseBdev1", 00:08:50.222 "uuid": "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510", 00:08:50.222 "is_configured": true, 00:08:50.222 "data_offset": 2048, 00:08:50.222 "data_size": 63488 00:08:50.222 }, 00:08:50.222 { 00:08:50.222 "name": null, 00:08:50.222 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:50.222 "is_configured": false, 00:08:50.223 "data_offset": 0, 00:08:50.223 "data_size": 63488 00:08:50.223 }, 00:08:50.223 { 00:08:50.223 "name": null, 00:08:50.223 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:50.223 "is_configured": false, 00:08:50.223 "data_offset": 0, 00:08:50.223 "data_size": 63488 00:08:50.223 } 00:08:50.223 ] 00:08:50.223 }' 00:08:50.223 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.223 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.482 
18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.482 [2024-11-16 18:49:33.829487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.482 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.483 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.483 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.483 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.483 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.483 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.483 "name": "Existed_Raid", 00:08:50.483 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:50.483 "strip_size_kb": 64, 00:08:50.483 "state": "configuring", 00:08:50.483 "raid_level": "concat", 00:08:50.483 "superblock": true, 00:08:50.483 "num_base_bdevs": 3, 00:08:50.483 "num_base_bdevs_discovered": 2, 00:08:50.483 "num_base_bdevs_operational": 3, 00:08:50.483 "base_bdevs_list": [ 00:08:50.483 { 00:08:50.483 "name": "BaseBdev1", 00:08:50.483 "uuid": "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510", 00:08:50.483 "is_configured": true, 00:08:50.483 "data_offset": 2048, 00:08:50.483 "data_size": 63488 00:08:50.483 }, 00:08:50.483 { 00:08:50.483 "name": null, 00:08:50.483 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:50.483 "is_configured": false, 00:08:50.483 "data_offset": 0, 00:08:50.483 "data_size": 
63488 00:08:50.483 }, 00:08:50.483 { 00:08:50.483 "name": "BaseBdev3", 00:08:50.483 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:50.483 "is_configured": true, 00:08:50.483 "data_offset": 2048, 00:08:50.483 "data_size": 63488 00:08:50.483 } 00:08:50.483 ] 00:08:50.483 }' 00:08:50.483 18:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.483 18:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.052 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.052 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:51.052 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.052 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.053 [2024-11-16 18:49:34.276782] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.053 "name": "Existed_Raid", 00:08:51.053 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:51.053 "strip_size_kb": 64, 00:08:51.053 "state": "configuring", 00:08:51.053 "raid_level": "concat", 00:08:51.053 "superblock": true, 00:08:51.053 "num_base_bdevs": 3, 00:08:51.053 "num_base_bdevs_discovered": 1, 00:08:51.053 "num_base_bdevs_operational": 
3, 00:08:51.053 "base_bdevs_list": [ 00:08:51.053 { 00:08:51.053 "name": null, 00:08:51.053 "uuid": "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510", 00:08:51.053 "is_configured": false, 00:08:51.053 "data_offset": 0, 00:08:51.053 "data_size": 63488 00:08:51.053 }, 00:08:51.053 { 00:08:51.053 "name": null, 00:08:51.053 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:51.053 "is_configured": false, 00:08:51.053 "data_offset": 0, 00:08:51.053 "data_size": 63488 00:08:51.053 }, 00:08:51.053 { 00:08:51.053 "name": "BaseBdev3", 00:08:51.053 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:51.053 "is_configured": true, 00:08:51.053 "data_offset": 2048, 00:08:51.053 "data_size": 63488 00:08:51.053 } 00:08:51.053 ] 00:08:51.053 }' 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.053 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:08:51.622 [2024-11-16 18:49:34.866143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.622 "name": "Existed_Raid", 00:08:51.622 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:51.622 "strip_size_kb": 64, 00:08:51.622 "state": "configuring", 00:08:51.622 "raid_level": "concat", 00:08:51.622 "superblock": true, 00:08:51.622 "num_base_bdevs": 3, 00:08:51.622 "num_base_bdevs_discovered": 2, 00:08:51.622 "num_base_bdevs_operational": 3, 00:08:51.622 "base_bdevs_list": [ 00:08:51.622 { 00:08:51.622 "name": null, 00:08:51.622 "uuid": "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510", 00:08:51.622 "is_configured": false, 00:08:51.622 "data_offset": 0, 00:08:51.622 "data_size": 63488 00:08:51.622 }, 00:08:51.622 { 00:08:51.622 "name": "BaseBdev2", 00:08:51.622 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:51.622 "is_configured": true, 00:08:51.622 "data_offset": 2048, 00:08:51.622 "data_size": 63488 00:08:51.622 }, 00:08:51.622 { 00:08:51.622 "name": "BaseBdev3", 00:08:51.622 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:51.622 "is_configured": true, 00:08:51.622 "data_offset": 2048, 00:08:51.622 "data_size": 63488 00:08:51.622 } 00:08:51.622 ] 00:08:51.622 }' 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.622 18:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.882 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.882 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.882 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.882 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:51.882 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.142 [2024-11-16 18:49:35.441534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:52.142 [2024-11-16 18:49:35.441856] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:52.142 [2024-11-16 18:49:35.441877] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:52.142 [2024-11-16 18:49:35.442117] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:52.142 [2024-11-16 18:49:35.442269] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:52.142 [2024-11-16 18:49:35.442278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:52.142 [2024-11-16 18:49:35.442402] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:08:52.142 NewBaseBdev 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.142 [ 00:08:52.142 { 00:08:52.142 "name": "NewBaseBdev", 00:08:52.142 "aliases": [ 00:08:52.142 "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510" 00:08:52.142 ], 00:08:52.142 "product_name": "Malloc disk", 00:08:52.142 "block_size": 512, 00:08:52.142 "num_blocks": 65536, 00:08:52.142 "uuid": "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510", 
00:08:52.142 "assigned_rate_limits": { 00:08:52.142 "rw_ios_per_sec": 0, 00:08:52.142 "rw_mbytes_per_sec": 0, 00:08:52.142 "r_mbytes_per_sec": 0, 00:08:52.142 "w_mbytes_per_sec": 0 00:08:52.142 }, 00:08:52.142 "claimed": true, 00:08:52.142 "claim_type": "exclusive_write", 00:08:52.142 "zoned": false, 00:08:52.142 "supported_io_types": { 00:08:52.142 "read": true, 00:08:52.142 "write": true, 00:08:52.142 "unmap": true, 00:08:52.142 "flush": true, 00:08:52.142 "reset": true, 00:08:52.142 "nvme_admin": false, 00:08:52.142 "nvme_io": false, 00:08:52.142 "nvme_io_md": false, 00:08:52.142 "write_zeroes": true, 00:08:52.142 "zcopy": true, 00:08:52.142 "get_zone_info": false, 00:08:52.142 "zone_management": false, 00:08:52.142 "zone_append": false, 00:08:52.142 "compare": false, 00:08:52.142 "compare_and_write": false, 00:08:52.142 "abort": true, 00:08:52.142 "seek_hole": false, 00:08:52.142 "seek_data": false, 00:08:52.142 "copy": true, 00:08:52.142 "nvme_iov_md": false 00:08:52.142 }, 00:08:52.142 "memory_domains": [ 00:08:52.142 { 00:08:52.142 "dma_device_id": "system", 00:08:52.142 "dma_device_type": 1 00:08:52.142 }, 00:08:52.142 { 00:08:52.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.142 "dma_device_type": 2 00:08:52.142 } 00:08:52.142 ], 00:08:52.142 "driver_specific": {} 00:08:52.142 } 00:08:52.142 ] 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:52.142 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.143 "name": "Existed_Raid", 00:08:52.143 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:52.143 "strip_size_kb": 64, 00:08:52.143 "state": "online", 00:08:52.143 "raid_level": "concat", 00:08:52.143 "superblock": true, 00:08:52.143 "num_base_bdevs": 3, 00:08:52.143 "num_base_bdevs_discovered": 3, 00:08:52.143 "num_base_bdevs_operational": 3, 00:08:52.143 "base_bdevs_list": [ 00:08:52.143 { 00:08:52.143 "name": "NewBaseBdev", 00:08:52.143 "uuid": "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510", 00:08:52.143 "is_configured": true, 00:08:52.143 "data_offset": 2048, 
00:08:52.143 "data_size": 63488 00:08:52.143 }, 00:08:52.143 { 00:08:52.143 "name": "BaseBdev2", 00:08:52.143 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:52.143 "is_configured": true, 00:08:52.143 "data_offset": 2048, 00:08:52.143 "data_size": 63488 00:08:52.143 }, 00:08:52.143 { 00:08:52.143 "name": "BaseBdev3", 00:08:52.143 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:52.143 "is_configured": true, 00:08:52.143 "data_offset": 2048, 00:08:52.143 "data_size": 63488 00:08:52.143 } 00:08:52.143 ] 00:08:52.143 }' 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.143 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.713 [2024-11-16 18:49:35.953084] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.713 "name": "Existed_Raid", 00:08:52.713 "aliases": [ 00:08:52.713 "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0" 00:08:52.713 ], 00:08:52.713 "product_name": "Raid Volume", 00:08:52.713 "block_size": 512, 00:08:52.713 "num_blocks": 190464, 00:08:52.713 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:52.713 "assigned_rate_limits": { 00:08:52.713 "rw_ios_per_sec": 0, 00:08:52.713 "rw_mbytes_per_sec": 0, 00:08:52.713 "r_mbytes_per_sec": 0, 00:08:52.713 "w_mbytes_per_sec": 0 00:08:52.713 }, 00:08:52.713 "claimed": false, 00:08:52.713 "zoned": false, 00:08:52.713 "supported_io_types": { 00:08:52.713 "read": true, 00:08:52.713 "write": true, 00:08:52.713 "unmap": true, 00:08:52.713 "flush": true, 00:08:52.713 "reset": true, 00:08:52.713 "nvme_admin": false, 00:08:52.713 "nvme_io": false, 00:08:52.713 "nvme_io_md": false, 00:08:52.713 "write_zeroes": true, 00:08:52.713 "zcopy": false, 00:08:52.713 "get_zone_info": false, 00:08:52.713 "zone_management": false, 00:08:52.713 "zone_append": false, 00:08:52.713 "compare": false, 00:08:52.713 "compare_and_write": false, 00:08:52.713 "abort": false, 00:08:52.713 "seek_hole": false, 00:08:52.713 "seek_data": false, 00:08:52.713 "copy": false, 00:08:52.713 "nvme_iov_md": false 00:08:52.713 }, 00:08:52.713 "memory_domains": [ 00:08:52.713 { 00:08:52.713 "dma_device_id": "system", 00:08:52.713 "dma_device_type": 1 00:08:52.713 }, 00:08:52.713 { 00:08:52.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.713 "dma_device_type": 2 00:08:52.713 }, 00:08:52.713 { 00:08:52.713 "dma_device_id": "system", 00:08:52.713 "dma_device_type": 1 00:08:52.713 }, 00:08:52.713 { 00:08:52.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.713 "dma_device_type": 2 00:08:52.713 }, 00:08:52.713 { 
00:08:52.713 "dma_device_id": "system", 00:08:52.713 "dma_device_type": 1 00:08:52.713 }, 00:08:52.713 { 00:08:52.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.713 "dma_device_type": 2 00:08:52.713 } 00:08:52.713 ], 00:08:52.713 "driver_specific": { 00:08:52.713 "raid": { 00:08:52.713 "uuid": "2b46dd89-fcd6-450a-aed0-bc4a0fc4dfb0", 00:08:52.713 "strip_size_kb": 64, 00:08:52.713 "state": "online", 00:08:52.713 "raid_level": "concat", 00:08:52.713 "superblock": true, 00:08:52.713 "num_base_bdevs": 3, 00:08:52.713 "num_base_bdevs_discovered": 3, 00:08:52.713 "num_base_bdevs_operational": 3, 00:08:52.713 "base_bdevs_list": [ 00:08:52.713 { 00:08:52.713 "name": "NewBaseBdev", 00:08:52.713 "uuid": "d9d6bfd5-c30a-49b5-9be4-6eff2e6d2510", 00:08:52.713 "is_configured": true, 00:08:52.713 "data_offset": 2048, 00:08:52.713 "data_size": 63488 00:08:52.713 }, 00:08:52.713 { 00:08:52.713 "name": "BaseBdev2", 00:08:52.713 "uuid": "00280b55-7f50-44e6-b9ff-883e7f977a32", 00:08:52.713 "is_configured": true, 00:08:52.713 "data_offset": 2048, 00:08:52.713 "data_size": 63488 00:08:52.713 }, 00:08:52.713 { 00:08:52.713 "name": "BaseBdev3", 00:08:52.713 "uuid": "b4d27aa5-bdfb-45ee-90eb-7e8a602f3036", 00:08:52.713 "is_configured": true, 00:08:52.713 "data_offset": 2048, 00:08:52.713 "data_size": 63488 00:08:52.713 } 00:08:52.713 ] 00:08:52.713 } 00:08:52.713 } 00:08:52.713 }' 00:08:52.713 18:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:52.713 BaseBdev2 00:08:52.713 BaseBdev3' 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 
00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.713 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:52.975 [2024-11-16 18:49:36.212318] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:52.975 [2024-11-16 18:49:36.212394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.975 [2024-11-16 18:49:36.212513] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.975 [2024-11-16 18:49:36.212597] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.975 [2024-11-16 18:49:36.212646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66052 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66052 ']' 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66052 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66052 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66052' 00:08:52.975 killing process with pid 66052 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66052 00:08:52.975 [2024-11-16 18:49:36.258432] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:52.975 18:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66052 00:08:53.249 [2024-11-16 18:49:36.551743] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:54.199 18:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:54.199 00:08:54.199 real 0m10.151s 00:08:54.199 user 0m16.124s 00:08:54.199 sys 0m1.789s 00:08:54.199 ************************************ 00:08:54.199 END TEST raid_state_function_test_sb 00:08:54.199 ************************************ 00:08:54.199 18:49:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.199 18:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:54.458 18:49:37 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:08:54.458 18:49:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:54.458 18:49:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.458 18:49:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:54.458 ************************************ 00:08:54.458 START TEST raid_superblock_test 00:08:54.458 ************************************ 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:54.458 18:49:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66667 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66667 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66667 ']' 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.458 18:49:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.458 [2024-11-16 18:49:37.808566] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:54.458 [2024-11-16 18:49:37.808816] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66667 ] 00:08:54.718 [2024-11-16 18:49:37.986999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.718 [2024-11-16 18:49:38.096308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.978 [2024-11-16 18:49:38.298803] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:54.978 [2024-11-16 18:49:38.298899] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:55.238 
18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.238 malloc1 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.238 [2024-11-16 18:49:38.676324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:55.238 [2024-11-16 18:49:38.676387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.238 [2024-11-16 18:49:38.676411] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:55.238 [2024-11-16 18:49:38.676420] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.238 [2024-11-16 18:49:38.678521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.238 [2024-11-16 18:49:38.678596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:55.238 pt1 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.238 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.498 malloc2 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.498 [2024-11-16 18:49:38.733726] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:55.498 [2024-11-16 18:49:38.733819] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.498 [2024-11-16 18:49:38.733872] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:55.498 [2024-11-16 18:49:38.733899] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.498 [2024-11-16 18:49:38.735908] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.498 [2024-11-16 18:49:38.735974] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:55.498 
pt2 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.498 malloc3 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.498 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.499 [2024-11-16 18:49:38.802119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:55.499 [2024-11-16 18:49:38.802223] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:55.499 [2024-11-16 18:49:38.802262] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:55.499 [2024-11-16 18:49:38.802289] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:55.499 [2024-11-16 18:49:38.804253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:55.499 [2024-11-16 18:49:38.804322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:55.499 pt3 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.499 [2024-11-16 18:49:38.814146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:55.499 [2024-11-16 18:49:38.815903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:55.499 [2024-11-16 18:49:38.815967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:55.499 [2024-11-16 18:49:38.816113] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:55.499 [2024-11-16 18:49:38.816127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:55.499 [2024-11-16 18:49:38.816358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:08:55.499 [2024-11-16 18:49:38.816523] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:55.499 [2024-11-16 18:49:38.816533] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:55.499 [2024-11-16 18:49:38.816687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:55.499 "name": "raid_bdev1", 00:08:55.499 "uuid": "010f9a79-fabe-47ba-9206-eb7a79250c3c", 00:08:55.499 "strip_size_kb": 64, 00:08:55.499 "state": "online", 00:08:55.499 "raid_level": "concat", 00:08:55.499 "superblock": true, 00:08:55.499 "num_base_bdevs": 3, 00:08:55.499 "num_base_bdevs_discovered": 3, 00:08:55.499 "num_base_bdevs_operational": 3, 00:08:55.499 "base_bdevs_list": [ 00:08:55.499 { 00:08:55.499 "name": "pt1", 00:08:55.499 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:55.499 "is_configured": true, 00:08:55.499 "data_offset": 2048, 00:08:55.499 "data_size": 63488 00:08:55.499 }, 00:08:55.499 { 00:08:55.499 "name": "pt2", 00:08:55.499 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.499 "is_configured": true, 00:08:55.499 "data_offset": 2048, 00:08:55.499 "data_size": 63488 00:08:55.499 }, 00:08:55.499 { 00:08:55.499 "name": "pt3", 00:08:55.499 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:55.499 "is_configured": true, 00:08:55.499 "data_offset": 2048, 00:08:55.499 "data_size": 63488 00:08:55.499 } 00:08:55.499 ] 00:08:55.499 }' 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:55.499 18:49:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.068 [2024-11-16 18:49:39.241714] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.068 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:56.068 "name": "raid_bdev1", 00:08:56.068 "aliases": [ 00:08:56.068 "010f9a79-fabe-47ba-9206-eb7a79250c3c" 00:08:56.068 ], 00:08:56.068 "product_name": "Raid Volume", 00:08:56.068 "block_size": 512, 00:08:56.068 "num_blocks": 190464, 00:08:56.068 "uuid": "010f9a79-fabe-47ba-9206-eb7a79250c3c", 00:08:56.068 "assigned_rate_limits": { 00:08:56.068 "rw_ios_per_sec": 0, 00:08:56.068 "rw_mbytes_per_sec": 0, 00:08:56.068 "r_mbytes_per_sec": 0, 00:08:56.068 "w_mbytes_per_sec": 0 00:08:56.068 }, 00:08:56.068 "claimed": false, 00:08:56.068 "zoned": false, 00:08:56.068 "supported_io_types": { 00:08:56.068 "read": true, 00:08:56.069 "write": true, 00:08:56.069 "unmap": true, 00:08:56.069 "flush": true, 00:08:56.069 "reset": true, 00:08:56.069 "nvme_admin": false, 00:08:56.069 "nvme_io": false, 00:08:56.069 "nvme_io_md": false, 00:08:56.069 "write_zeroes": true, 00:08:56.069 "zcopy": false, 00:08:56.069 "get_zone_info": false, 00:08:56.069 "zone_management": false, 00:08:56.069 "zone_append": false, 00:08:56.069 "compare": 
false, 00:08:56.069 "compare_and_write": false, 00:08:56.069 "abort": false, 00:08:56.069 "seek_hole": false, 00:08:56.069 "seek_data": false, 00:08:56.069 "copy": false, 00:08:56.069 "nvme_iov_md": false 00:08:56.069 }, 00:08:56.069 "memory_domains": [ 00:08:56.069 { 00:08:56.069 "dma_device_id": "system", 00:08:56.069 "dma_device_type": 1 00:08:56.069 }, 00:08:56.069 { 00:08:56.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.069 "dma_device_type": 2 00:08:56.069 }, 00:08:56.069 { 00:08:56.069 "dma_device_id": "system", 00:08:56.069 "dma_device_type": 1 00:08:56.069 }, 00:08:56.069 { 00:08:56.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.069 "dma_device_type": 2 00:08:56.069 }, 00:08:56.069 { 00:08:56.069 "dma_device_id": "system", 00:08:56.069 "dma_device_type": 1 00:08:56.069 }, 00:08:56.069 { 00:08:56.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:56.069 "dma_device_type": 2 00:08:56.069 } 00:08:56.069 ], 00:08:56.069 "driver_specific": { 00:08:56.069 "raid": { 00:08:56.069 "uuid": "010f9a79-fabe-47ba-9206-eb7a79250c3c", 00:08:56.069 "strip_size_kb": 64, 00:08:56.069 "state": "online", 00:08:56.069 "raid_level": "concat", 00:08:56.069 "superblock": true, 00:08:56.069 "num_base_bdevs": 3, 00:08:56.069 "num_base_bdevs_discovered": 3, 00:08:56.069 "num_base_bdevs_operational": 3, 00:08:56.069 "base_bdevs_list": [ 00:08:56.069 { 00:08:56.069 "name": "pt1", 00:08:56.069 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.069 "is_configured": true, 00:08:56.069 "data_offset": 2048, 00:08:56.069 "data_size": 63488 00:08:56.069 }, 00:08:56.069 { 00:08:56.069 "name": "pt2", 00:08:56.069 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.069 "is_configured": true, 00:08:56.069 "data_offset": 2048, 00:08:56.069 "data_size": 63488 00:08:56.069 }, 00:08:56.069 { 00:08:56.069 "name": "pt3", 00:08:56.069 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.069 "is_configured": true, 00:08:56.069 "data_offset": 2048, 00:08:56.069 
"data_size": 63488 00:08:56.069 } 00:08:56.069 ] 00:08:56.069 } 00:08:56.069 } 00:08:56.069 }' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:56.069 pt2 00:08:56.069 pt3' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.069 18:49:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.069 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.069 [2024-11-16 18:49:39.501208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:56.069 18:49:39 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=010f9a79-fabe-47ba-9206-eb7a79250c3c 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 010f9a79-fabe-47ba-9206-eb7a79250c3c ']' 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.329 [2024-11-16 18:49:39.544867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.329 [2024-11-16 18:49:39.544892] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.329 [2024-11-16 18:49:39.544965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.329 [2024-11-16 18:49:39.545027] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.329 [2024-11-16 18:49:39.545036] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.329 18:49:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 
-- # rpc_cmd bdev_get_bdevs 00:08:56.329 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.330 [2024-11-16 18:49:39.692661] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:56.330 [2024-11-16 18:49:39.694478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is 
claimed 00:08:56.330 [2024-11-16 18:49:39.694524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:08:56.330 [2024-11-16 18:49:39.694571] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:56.330 [2024-11-16 18:49:39.694620] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:56.330 [2024-11-16 18:49:39.694639] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:08:56.330 [2024-11-16 18:49:39.694673] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.330 [2024-11-16 18:49:39.694683] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:56.330 request: 00:08:56.330 { 00:08:56.330 "name": "raid_bdev1", 00:08:56.330 "raid_level": "concat", 00:08:56.330 "base_bdevs": [ 00:08:56.330 "malloc1", 00:08:56.330 "malloc2", 00:08:56.330 "malloc3" 00:08:56.330 ], 00:08:56.330 "strip_size_kb": 64, 00:08:56.330 "superblock": false, 00:08:56.330 "method": "bdev_raid_create", 00:08:56.330 "req_id": 1 00:08:56.330 } 00:08:56.330 Got JSON-RPC error response 00:08:56.330 response: 00:08:56.330 { 00:08:56.330 "code": -17, 00:08:56.330 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:56.330 } 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.330 [2024-11-16 18:49:39.760479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:56.330 [2024-11-16 18:49:39.760572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.330 [2024-11-16 18:49:39.760608] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:08:56.330 [2024-11-16 18:49:39.760636] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.330 [2024-11-16 18:49:39.762940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.330 [2024-11-16 18:49:39.763004] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:56.330 [2024-11-16 18:49:39.763118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:56.330 [2024-11-16 18:49:39.763197] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:56.330 pt1 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.330 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.589 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.589 "name": "raid_bdev1", 
00:08:56.589 "uuid": "010f9a79-fabe-47ba-9206-eb7a79250c3c", 00:08:56.589 "strip_size_kb": 64, 00:08:56.589 "state": "configuring", 00:08:56.589 "raid_level": "concat", 00:08:56.589 "superblock": true, 00:08:56.590 "num_base_bdevs": 3, 00:08:56.590 "num_base_bdevs_discovered": 1, 00:08:56.590 "num_base_bdevs_operational": 3, 00:08:56.590 "base_bdevs_list": [ 00:08:56.590 { 00:08:56.590 "name": "pt1", 00:08:56.590 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.590 "is_configured": true, 00:08:56.590 "data_offset": 2048, 00:08:56.590 "data_size": 63488 00:08:56.590 }, 00:08:56.590 { 00:08:56.590 "name": null, 00:08:56.590 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.590 "is_configured": false, 00:08:56.590 "data_offset": 2048, 00:08:56.590 "data_size": 63488 00:08:56.590 }, 00:08:56.590 { 00:08:56.590 "name": null, 00:08:56.590 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.590 "is_configured": false, 00:08:56.590 "data_offset": 2048, 00:08:56.590 "data_size": 63488 00:08:56.590 } 00:08:56.590 ] 00:08:56.590 }' 00:08:56.590 18:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.590 18:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.849 [2024-11-16 18:49:40.171801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:56.849 [2024-11-16 18:49:40.171868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:56.849 [2024-11-16 18:49:40.171890] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:08:56.849 [2024-11-16 18:49:40.171900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:56.849 [2024-11-16 18:49:40.172329] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:56.849 [2024-11-16 18:49:40.172344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:56.849 [2024-11-16 18:49:40.172425] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:56.849 [2024-11-16 18:49:40.172445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:56.849 pt2 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.849 [2024-11-16 18:49:40.183784] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:56.849 "name": "raid_bdev1", 00:08:56.849 "uuid": "010f9a79-fabe-47ba-9206-eb7a79250c3c", 00:08:56.849 "strip_size_kb": 64, 00:08:56.849 "state": "configuring", 00:08:56.849 "raid_level": "concat", 00:08:56.849 "superblock": true, 00:08:56.849 "num_base_bdevs": 3, 00:08:56.849 "num_base_bdevs_discovered": 1, 00:08:56.849 "num_base_bdevs_operational": 3, 00:08:56.849 "base_bdevs_list": [ 00:08:56.849 { 00:08:56.849 "name": "pt1", 00:08:56.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:56.849 "is_configured": true, 00:08:56.849 "data_offset": 2048, 00:08:56.849 "data_size": 63488 00:08:56.849 }, 00:08:56.849 { 00:08:56.849 "name": null, 00:08:56.849 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:56.849 "is_configured": false, 00:08:56.849 "data_offset": 0, 00:08:56.849 "data_size": 63488 00:08:56.849 }, 00:08:56.849 { 00:08:56.849 "name": null, 00:08:56.849 
"uuid": "00000000-0000-0000-0000-000000000003", 00:08:56.849 "is_configured": false, 00:08:56.849 "data_offset": 2048, 00:08:56.849 "data_size": 63488 00:08:56.849 } 00:08:56.849 ] 00:08:56.849 }' 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:56.849 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.418 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.419 [2024-11-16 18:49:40.654964] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:57.419 [2024-11-16 18:49:40.655081] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.419 [2024-11-16 18:49:40.655116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:08:57.419 [2024-11-16 18:49:40.655145] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.419 [2024-11-16 18:49:40.655634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.419 [2024-11-16 18:49:40.655711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:57.419 [2024-11-16 18:49:40.655819] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:57.419 [2024-11-16 18:49:40.655878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:57.419 pt2 00:08:57.419 18:49:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.419 [2024-11-16 18:49:40.666918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:08:57.419 [2024-11-16 18:49:40.666998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.419 [2024-11-16 18:49:40.667027] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:08:57.419 [2024-11-16 18:49:40.667055] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.419 [2024-11-16 18:49:40.667452] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.419 [2024-11-16 18:49:40.667510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:08:57.419 [2024-11-16 18:49:40.667598] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:08:57.419 [2024-11-16 18:49:40.667646] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:08:57.419 [2024-11-16 18:49:40.667810] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:57.419 [2024-11-16 18:49:40.667857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:57.419 [2024-11-16 18:49:40.668112] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:08:57.419 [2024-11-16 18:49:40.668295] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:57.419 [2024-11-16 18:49:40.668332] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:57.419 [2024-11-16 18:49:40.668494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.419 pt3 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.419 18:49:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.419 "name": "raid_bdev1", 00:08:57.419 "uuid": "010f9a79-fabe-47ba-9206-eb7a79250c3c", 00:08:57.419 "strip_size_kb": 64, 00:08:57.419 "state": "online", 00:08:57.419 "raid_level": "concat", 00:08:57.419 "superblock": true, 00:08:57.419 "num_base_bdevs": 3, 00:08:57.419 "num_base_bdevs_discovered": 3, 00:08:57.419 "num_base_bdevs_operational": 3, 00:08:57.419 "base_bdevs_list": [ 00:08:57.419 { 00:08:57.419 "name": "pt1", 00:08:57.419 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.419 "is_configured": true, 00:08:57.419 "data_offset": 2048, 00:08:57.419 "data_size": 63488 00:08:57.419 }, 00:08:57.419 { 00:08:57.419 "name": "pt2", 00:08:57.419 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.419 "is_configured": true, 00:08:57.419 "data_offset": 2048, 00:08:57.419 "data_size": 63488 00:08:57.419 }, 00:08:57.419 { 00:08:57.419 "name": "pt3", 00:08:57.419 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.419 "is_configured": true, 00:08:57.419 "data_offset": 2048, 00:08:57.419 "data_size": 63488 00:08:57.419 } 00:08:57.419 ] 00:08:57.419 }' 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.419 18:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:57.678 [2024-11-16 18:49:41.110455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.678 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:57.678 "name": "raid_bdev1", 00:08:57.678 "aliases": [ 00:08:57.678 "010f9a79-fabe-47ba-9206-eb7a79250c3c" 00:08:57.678 ], 00:08:57.678 "product_name": "Raid Volume", 00:08:57.678 "block_size": 512, 00:08:57.678 "num_blocks": 190464, 00:08:57.678 "uuid": "010f9a79-fabe-47ba-9206-eb7a79250c3c", 00:08:57.678 "assigned_rate_limits": { 00:08:57.678 "rw_ios_per_sec": 0, 00:08:57.678 "rw_mbytes_per_sec": 0, 00:08:57.678 "r_mbytes_per_sec": 0, 00:08:57.678 "w_mbytes_per_sec": 0 00:08:57.678 }, 00:08:57.678 "claimed": false, 00:08:57.678 "zoned": false, 00:08:57.678 "supported_io_types": { 00:08:57.678 "read": true, 00:08:57.678 "write": true, 00:08:57.678 "unmap": true, 00:08:57.678 "flush": true, 00:08:57.678 "reset": true, 00:08:57.678 "nvme_admin": false, 00:08:57.678 "nvme_io": false, 
00:08:57.678 "nvme_io_md": false, 00:08:57.678 "write_zeroes": true, 00:08:57.678 "zcopy": false, 00:08:57.678 "get_zone_info": false, 00:08:57.678 "zone_management": false, 00:08:57.678 "zone_append": false, 00:08:57.678 "compare": false, 00:08:57.678 "compare_and_write": false, 00:08:57.678 "abort": false, 00:08:57.678 "seek_hole": false, 00:08:57.678 "seek_data": false, 00:08:57.678 "copy": false, 00:08:57.678 "nvme_iov_md": false 00:08:57.678 }, 00:08:57.678 "memory_domains": [ 00:08:57.678 { 00:08:57.678 "dma_device_id": "system", 00:08:57.678 "dma_device_type": 1 00:08:57.678 }, 00:08:57.678 { 00:08:57.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.678 "dma_device_type": 2 00:08:57.678 }, 00:08:57.678 { 00:08:57.678 "dma_device_id": "system", 00:08:57.678 "dma_device_type": 1 00:08:57.678 }, 00:08:57.678 { 00:08:57.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.678 "dma_device_type": 2 00:08:57.678 }, 00:08:57.678 { 00:08:57.678 "dma_device_id": "system", 00:08:57.678 "dma_device_type": 1 00:08:57.678 }, 00:08:57.678 { 00:08:57.678 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.678 "dma_device_type": 2 00:08:57.678 } 00:08:57.678 ], 00:08:57.678 "driver_specific": { 00:08:57.678 "raid": { 00:08:57.678 "uuid": "010f9a79-fabe-47ba-9206-eb7a79250c3c", 00:08:57.678 "strip_size_kb": 64, 00:08:57.678 "state": "online", 00:08:57.678 "raid_level": "concat", 00:08:57.678 "superblock": true, 00:08:57.678 "num_base_bdevs": 3, 00:08:57.678 "num_base_bdevs_discovered": 3, 00:08:57.678 "num_base_bdevs_operational": 3, 00:08:57.678 "base_bdevs_list": [ 00:08:57.678 { 00:08:57.678 "name": "pt1", 00:08:57.678 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:57.678 "is_configured": true, 00:08:57.678 "data_offset": 2048, 00:08:57.678 "data_size": 63488 00:08:57.678 }, 00:08:57.678 { 00:08:57.678 "name": "pt2", 00:08:57.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.678 "is_configured": true, 00:08:57.678 "data_offset": 2048, 00:08:57.679 
"data_size": 63488 00:08:57.679 }, 00:08:57.679 { 00:08:57.679 "name": "pt3", 00:08:57.679 "uuid": "00000000-0000-0000-0000-000000000003", 00:08:57.679 "is_configured": true, 00:08:57.679 "data_offset": 2048, 00:08:57.679 "data_size": 63488 00:08:57.679 } 00:08:57.679 ] 00:08:57.679 } 00:08:57.679 } 00:08:57.679 }' 00:08:57.679 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:57.937 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:57.937 pt2 00:08:57.937 pt3' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:57.938 [2024-11-16 18:49:41.381987] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:57.938 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 010f9a79-fabe-47ba-9206-eb7a79250c3c '!=' 010f9a79-fabe-47ba-9206-eb7a79250c3c ']' 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66667 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66667 ']' 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66667 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66667 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66667' 00:08:58.197 killing process with pid 66667 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66667 00:08:58.197 [2024-11-16 18:49:41.464026] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:08:58.197 [2024-11-16 18:49:41.464174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.197 [2024-11-16 18:49:41.464240] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.197 [2024-11-16 18:49:41.464252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:58.197 18:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66667 00:08:58.457 [2024-11-16 18:49:41.762592] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.397 ************************************ 00:08:59.397 END TEST raid_superblock_test 00:08:59.397 ************************************ 00:08:59.397 18:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:59.397 00:08:59.397 real 0m5.143s 00:08:59.397 user 0m7.356s 00:08:59.397 sys 0m0.904s 00:08:59.397 18:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.397 18:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.666 18:49:42 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:08:59.666 18:49:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:59.666 18:49:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.666 18:49:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:59.666 ************************************ 00:08:59.666 START TEST raid_read_error_test 00:08:59.666 ************************************ 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:59.666 18:49:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nj8AlIfdxF 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66920 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66920 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66920 ']' 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.666 18:49:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.666 [2024-11-16 18:49:43.035176] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:59.666 [2024-11-16 18:49:43.035415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66920 ] 00:08:59.937 [2024-11-16 18:49:43.230071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.937 [2024-11-16 18:49:43.340918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.196 [2024-11-16 18:49:43.539575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.196 [2024-11-16 18:49:43.539609] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.457 BaseBdev1_malloc 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.457 true 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.457 [2024-11-16 18:49:43.912997] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:00.457 [2024-11-16 18:49:43.913048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.457 [2024-11-16 18:49:43.913067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:00.457 [2024-11-16 18:49:43.913078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.457 [2024-11-16 18:49:43.915132] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.457 [2024-11-16 18:49:43.915244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:00.457 BaseBdev1 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.457 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.716 BaseBdev2_malloc 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.716 true 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.716 [2024-11-16 18:49:43.976941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:00.716 [2024-11-16 18:49:43.977034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.716 [2024-11-16 18:49:43.977054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:00.716 [2024-11-16 18:49:43.977064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.716 [2024-11-16 18:49:43.979160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.716 [2024-11-16 18:49:43.979198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:00.716 BaseBdev2 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.716 18:49:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.716 BaseBdev3_malloc 00:09:00.716 18:49:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.716 true 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.716 [2024-11-16 18:49:44.056098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:00.716 [2024-11-16 18:49:44.056146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.716 [2024-11-16 18:49:44.056162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:00.716 [2024-11-16 18:49:44.056172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.716 [2024-11-16 18:49:44.058278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.716 [2024-11-16 18:49:44.058367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:00.716 BaseBdev3 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.716 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.716 [2024-11-16 18:49:44.068147] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.716 [2024-11-16 18:49:44.069985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.716 [2024-11-16 18:49:44.070066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.716 [2024-11-16 18:49:44.070267] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:00.717 [2024-11-16 18:49:44.070279] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:00.717 [2024-11-16 18:49:44.070530] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:00.717 [2024-11-16 18:49:44.070690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:00.717 [2024-11-16 18:49:44.070704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:00.717 [2024-11-16 18:49:44.070842] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.717 18:49:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.717 "name": "raid_bdev1", 00:09:00.717 "uuid": "ab40ba17-77f1-4a88-9ce7-24079311336f", 00:09:00.717 "strip_size_kb": 64, 00:09:00.717 "state": "online", 00:09:00.717 "raid_level": "concat", 00:09:00.717 "superblock": true, 00:09:00.717 "num_base_bdevs": 3, 00:09:00.717 "num_base_bdevs_discovered": 3, 00:09:00.717 "num_base_bdevs_operational": 3, 00:09:00.717 "base_bdevs_list": [ 00:09:00.717 { 00:09:00.717 "name": "BaseBdev1", 00:09:00.717 "uuid": "1ba16bcf-1796-5be8-be79-bab177373294", 00:09:00.717 "is_configured": true, 00:09:00.717 "data_offset": 2048, 00:09:00.717 "data_size": 63488 00:09:00.717 }, 00:09:00.717 { 00:09:00.717 "name": "BaseBdev2", 00:09:00.717 "uuid": "99b330f8-ff09-5ab1-b113-2a8d707dd3ce", 00:09:00.717 "is_configured": true, 00:09:00.717 "data_offset": 2048, 00:09:00.717 "data_size": 63488 
00:09:00.717 }, 00:09:00.717 { 00:09:00.717 "name": "BaseBdev3", 00:09:00.717 "uuid": "6740edeb-3d1e-5e49-8888-1c79eabd01fb", 00:09:00.717 "is_configured": true, 00:09:00.717 "data_offset": 2048, 00:09:00.717 "data_size": 63488 00:09:00.717 } 00:09:00.717 ] 00:09:00.717 }' 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.717 18:49:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.286 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:01.286 18:49:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:01.286 [2024-11-16 18:49:44.628583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.227 "name": "raid_bdev1", 00:09:02.227 "uuid": "ab40ba17-77f1-4a88-9ce7-24079311336f", 00:09:02.227 "strip_size_kb": 64, 00:09:02.227 "state": "online", 00:09:02.227 "raid_level": "concat", 00:09:02.227 "superblock": true, 00:09:02.227 "num_base_bdevs": 3, 00:09:02.227 "num_base_bdevs_discovered": 3, 00:09:02.227 "num_base_bdevs_operational": 3, 00:09:02.227 "base_bdevs_list": [ 00:09:02.227 { 00:09:02.227 "name": "BaseBdev1", 00:09:02.227 "uuid": "1ba16bcf-1796-5be8-be79-bab177373294", 00:09:02.227 "is_configured": true, 00:09:02.227 "data_offset": 2048, 00:09:02.227 "data_size": 63488 
00:09:02.227 }, 00:09:02.227 { 00:09:02.227 "name": "BaseBdev2", 00:09:02.227 "uuid": "99b330f8-ff09-5ab1-b113-2a8d707dd3ce", 00:09:02.227 "is_configured": true, 00:09:02.227 "data_offset": 2048, 00:09:02.227 "data_size": 63488 00:09:02.227 }, 00:09:02.227 { 00:09:02.227 "name": "BaseBdev3", 00:09:02.227 "uuid": "6740edeb-3d1e-5e49-8888-1c79eabd01fb", 00:09:02.227 "is_configured": true, 00:09:02.227 "data_offset": 2048, 00:09:02.227 "data_size": 63488 00:09:02.227 } 00:09:02.227 ] 00:09:02.227 }' 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.227 18:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.796 [2024-11-16 18:49:46.016544] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:02.796 [2024-11-16 18:49:46.016621] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:02.796 [2024-11-16 18:49:46.019386] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:02.796 [2024-11-16 18:49:46.019465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.796 [2024-11-16 18:49:46.019521] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:02.796 [2024-11-16 18:49:46.019563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:02.796 { 00:09:02.796 "results": [ 00:09:02.796 { 00:09:02.796 "job": "raid_bdev1", 00:09:02.796 "core_mask": "0x1", 00:09:02.796 "workload": "randrw", 00:09:02.796 "percentage": 50, 
00:09:02.796 "status": "finished", 00:09:02.796 "queue_depth": 1, 00:09:02.796 "io_size": 131072, 00:09:02.796 "runtime": 1.38895, 00:09:02.796 "iops": 16434.71687245761, 00:09:02.796 "mibps": 2054.3396090572014, 00:09:02.796 "io_failed": 1, 00:09:02.796 "io_timeout": 0, 00:09:02.796 "avg_latency_us": 84.55563725846524, 00:09:02.796 "min_latency_us": 24.370305676855896, 00:09:02.796 "max_latency_us": 1366.5257641921398 00:09:02.796 } 00:09:02.796 ], 00:09:02.796 "core_count": 1 00:09:02.796 } 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66920 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66920 ']' 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66920 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66920 00:09:02.796 killing process with pid 66920 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66920' 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66920 00:09:02.796 [2024-11-16 18:49:46.069214] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:02.796 18:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66920 00:09:03.055 [2024-11-16 
18:49:46.287019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nj8AlIfdxF 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:03.993 00:09:03.993 real 0m4.501s 00:09:03.993 user 0m5.345s 00:09:03.993 sys 0m0.586s 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.993 18:49:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.993 ************************************ 00:09:03.993 END TEST raid_read_error_test 00:09:03.993 ************************************ 00:09:04.254 18:49:47 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:04.254 18:49:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:04.254 18:49:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:04.254 18:49:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:04.254 ************************************ 00:09:04.254 START TEST raid_write_error_test 00:09:04.254 ************************************ 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:04.254 18:49:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:04.254 18:49:47 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.SeVegm7egE 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67066 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67066 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67066 ']' 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:04.254 18:49:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.254 [2024-11-16 18:49:47.602424] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:04.254 [2024-11-16 18:49:47.602637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67066 ] 00:09:04.514 [2024-11-16 18:49:47.770286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:04.514 [2024-11-16 18:49:47.885459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.774 [2024-11-16 18:49:48.083276] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:04.774 [2024-11-16 18:49:48.083436] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:05.034 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.034 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:05.034 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.035 BaseBdev1_malloc 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.035 true 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.035 [2024-11-16 18:49:48.488201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:05.035 [2024-11-16 18:49:48.488256] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.035 [2024-11-16 18:49:48.488276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:05.035 [2024-11-16 18:49:48.488287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.035 [2024-11-16 18:49:48.490373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.035 [2024-11-16 18:49:48.490413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:05.035 BaseBdev1 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.035 18:49:48 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.295 BaseBdev2_malloc 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.295 true 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.295 [2024-11-16 18:49:48.557500] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:05.295 [2024-11-16 18:49:48.557551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.295 [2024-11-16 18:49:48.557568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:05.295 [2024-11-16 18:49:48.557578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.295 [2024-11-16 18:49:48.559702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.295 [2024-11-16 18:49:48.559739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:05.295 BaseBdev2 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:05.295 18:49:48 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.295 BaseBdev3_malloc 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.295 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.295 true 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.296 [2024-11-16 18:49:48.633785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:05.296 [2024-11-16 18:49:48.633837] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:05.296 [2024-11-16 18:49:48.633853] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:05.296 [2024-11-16 18:49:48.633863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:05.296 [2024-11-16 18:49:48.635880] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:05.296 [2024-11-16 18:49:48.635997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:05.296 BaseBdev3 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.296 [2024-11-16 18:49:48.645832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:05.296 [2024-11-16 18:49:48.647627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:05.296 [2024-11-16 18:49:48.647719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:05.296 [2024-11-16 18:49:48.647923] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:05.296 [2024-11-16 18:49:48.647936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:05.296 [2024-11-16 18:49:48.648167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:05.296 [2024-11-16 18:49:48.648320] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:05.296 [2024-11-16 18:49:48.648333] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:05.296 [2024-11-16 18:49:48.648473] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:05.296 "name": "raid_bdev1", 00:09:05.296 "uuid": "8e4dd7fc-c902-4f87-b66b-9a99ef6cda88", 00:09:05.296 "strip_size_kb": 64, 00:09:05.296 "state": "online", 00:09:05.296 "raid_level": "concat", 00:09:05.296 "superblock": true, 00:09:05.296 "num_base_bdevs": 3, 00:09:05.296 "num_base_bdevs_discovered": 3, 00:09:05.296 "num_base_bdevs_operational": 3, 00:09:05.296 "base_bdevs_list": [ 00:09:05.296 { 00:09:05.296 
"name": "BaseBdev1", 00:09:05.296 "uuid": "fa014522-04d7-5279-95d9-1db36032a5fa", 00:09:05.296 "is_configured": true, 00:09:05.296 "data_offset": 2048, 00:09:05.296 "data_size": 63488 00:09:05.296 }, 00:09:05.296 { 00:09:05.296 "name": "BaseBdev2", 00:09:05.296 "uuid": "dbaf0659-806f-5800-b7b0-b4d15c6f764a", 00:09:05.296 "is_configured": true, 00:09:05.296 "data_offset": 2048, 00:09:05.296 "data_size": 63488 00:09:05.296 }, 00:09:05.296 { 00:09:05.296 "name": "BaseBdev3", 00:09:05.296 "uuid": "9093f62f-0871-5b53-8836-06ae11177301", 00:09:05.296 "is_configured": true, 00:09:05.296 "data_offset": 2048, 00:09:05.296 "data_size": 63488 00:09:05.296 } 00:09:05.296 ] 00:09:05.296 }' 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:05.296 18:49:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.868 18:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:05.869 18:49:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:05.869 [2024-11-16 18:49:49.190061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:06.816 "name": "raid_bdev1", 00:09:06.816 "uuid": "8e4dd7fc-c902-4f87-b66b-9a99ef6cda88", 00:09:06.816 "strip_size_kb": 64, 00:09:06.816 "state": "online", 
00:09:06.816 "raid_level": "concat", 00:09:06.816 "superblock": true, 00:09:06.816 "num_base_bdevs": 3, 00:09:06.816 "num_base_bdevs_discovered": 3, 00:09:06.816 "num_base_bdevs_operational": 3, 00:09:06.816 "base_bdevs_list": [ 00:09:06.816 { 00:09:06.816 "name": "BaseBdev1", 00:09:06.816 "uuid": "fa014522-04d7-5279-95d9-1db36032a5fa", 00:09:06.816 "is_configured": true, 00:09:06.816 "data_offset": 2048, 00:09:06.816 "data_size": 63488 00:09:06.816 }, 00:09:06.816 { 00:09:06.816 "name": "BaseBdev2", 00:09:06.816 "uuid": "dbaf0659-806f-5800-b7b0-b4d15c6f764a", 00:09:06.816 "is_configured": true, 00:09:06.816 "data_offset": 2048, 00:09:06.816 "data_size": 63488 00:09:06.816 }, 00:09:06.816 { 00:09:06.816 "name": "BaseBdev3", 00:09:06.816 "uuid": "9093f62f-0871-5b53-8836-06ae11177301", 00:09:06.816 "is_configured": true, 00:09:06.816 "data_offset": 2048, 00:09:06.816 "data_size": 63488 00:09:06.816 } 00:09:06.816 ] 00:09:06.816 }' 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:06.816 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.402 [2024-11-16 18:49:50.569996] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:07.402 [2024-11-16 18:49:50.570025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:07.402 [2024-11-16 18:49:50.572686] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:07.402 [2024-11-16 18:49:50.572733] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.402 [2024-11-16 18:49:50.572768] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:07.402 [2024-11-16 18:49:50.572779] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:07.402 { 00:09:07.402 "results": [ 00:09:07.402 { 00:09:07.402 "job": "raid_bdev1", 00:09:07.402 "core_mask": "0x1", 00:09:07.402 "workload": "randrw", 00:09:07.402 "percentage": 50, 00:09:07.402 "status": "finished", 00:09:07.402 "queue_depth": 1, 00:09:07.402 "io_size": 131072, 00:09:07.402 "runtime": 1.380686, 00:09:07.402 "iops": 16588.130827718975, 00:09:07.402 "mibps": 2073.516353464872, 00:09:07.402 "io_failed": 1, 00:09:07.402 "io_timeout": 0, 00:09:07.402 "avg_latency_us": 83.81773172855907, 00:09:07.402 "min_latency_us": 24.705676855895195, 00:09:07.402 "max_latency_us": 1445.2262008733624 00:09:07.402 } 00:09:07.402 ], 00:09:07.402 "core_count": 1 00:09:07.402 } 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67066 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67066 ']' 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67066 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67066 00:09:07.402 killing process with pid 67066 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.402 18:49:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67066' 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67066 00:09:07.402 [2024-11-16 18:49:50.617257] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:07.402 18:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67066 00:09:07.402 [2024-11-16 18:49:50.831673] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:08.782 18:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.SeVegm7egE 00:09:08.782 18:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:08.782 18:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:08.782 18:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:09:08.782 18:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:08.783 18:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:08.783 18:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:08.783 18:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:09:08.783 00:09:08.783 real 0m4.446s 00:09:08.783 user 0m5.270s 00:09:08.783 sys 0m0.590s 00:09:08.783 18:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.783 18:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.783 ************************************ 00:09:08.783 END TEST raid_write_error_test 00:09:08.783 ************************************ 00:09:08.783 18:49:52 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:08.783 18:49:52 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:08.783 18:49:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:08.783 18:49:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.783 18:49:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:08.783 ************************************ 00:09:08.783 START TEST raid_state_function_test 00:09:08.783 ************************************ 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67204 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67204' 00:09:08.783 Process raid pid: 67204 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67204 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67204 ']' 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.783 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.783 [2024-11-16 18:49:52.116544] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:08.783 [2024-11-16 18:49:52.116783] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.043 [2024-11-16 18:49:52.275717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.043 [2024-11-16 18:49:52.388088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.303 [2024-11-16 18:49:52.586457] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.303 [2024-11-16 18:49:52.586499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.563 [2024-11-16 18:49:52.936388] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:09.563 [2024-11-16 18:49:52.936517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:09.563 [2024-11-16 18:49:52.936532] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:09.563 [2024-11-16 18:49:52.936543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:09.563 [2024-11-16 18:49:52.936549] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:09.563 [2024-11-16 18:49:52.936558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.563 
18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.563 "name": "Existed_Raid", 00:09:09.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.563 "strip_size_kb": 0, 00:09:09.563 "state": "configuring", 00:09:09.563 "raid_level": "raid1", 00:09:09.563 "superblock": false, 00:09:09.563 "num_base_bdevs": 3, 00:09:09.563 "num_base_bdevs_discovered": 0, 00:09:09.563 "num_base_bdevs_operational": 3, 00:09:09.563 "base_bdevs_list": [ 00:09:09.563 { 00:09:09.563 "name": "BaseBdev1", 00:09:09.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.563 "is_configured": false, 00:09:09.563 "data_offset": 0, 00:09:09.563 "data_size": 0 00:09:09.563 }, 00:09:09.563 { 00:09:09.563 "name": "BaseBdev2", 00:09:09.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.563 "is_configured": false, 00:09:09.563 "data_offset": 0, 00:09:09.563 "data_size": 0 00:09:09.563 }, 00:09:09.563 { 00:09:09.563 "name": "BaseBdev3", 00:09:09.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.563 "is_configured": false, 00:09:09.563 "data_offset": 0, 00:09:09.563 "data_size": 0 00:09:09.563 } 00:09:09.563 ] 00:09:09.563 }' 00:09:09.563 18:49:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.563 18:49:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.134 [2024-11-16 18:49:53.359621] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.134 [2024-11-16 18:49:53.359734] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.134 [2024-11-16 18:49:53.371565] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.134 [2024-11-16 18:49:53.371673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.134 [2024-11-16 18:49:53.371704] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.134 [2024-11-16 18:49:53.371727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.134 [2024-11-16 18:49:53.371745] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.134 [2024-11-16 18:49:53.371766] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.134 [2024-11-16 18:49:53.417619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.134 BaseBdev1 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.134 [ 00:09:10.134 { 00:09:10.134 "name": "BaseBdev1", 00:09:10.134 "aliases": [ 00:09:10.134 "e0879ea7-332d-4868-afc0-181821599a56" 00:09:10.134 ], 00:09:10.134 "product_name": "Malloc disk", 00:09:10.134 "block_size": 512, 00:09:10.134 "num_blocks": 65536, 00:09:10.134 "uuid": "e0879ea7-332d-4868-afc0-181821599a56", 00:09:10.134 "assigned_rate_limits": { 00:09:10.134 "rw_ios_per_sec": 0, 00:09:10.134 "rw_mbytes_per_sec": 0, 00:09:10.134 "r_mbytes_per_sec": 0, 00:09:10.134 "w_mbytes_per_sec": 0 00:09:10.134 }, 00:09:10.134 "claimed": true, 00:09:10.134 "claim_type": "exclusive_write", 00:09:10.134 "zoned": false, 00:09:10.134 "supported_io_types": { 00:09:10.134 "read": true, 00:09:10.134 "write": true, 00:09:10.134 "unmap": true, 00:09:10.134 "flush": true, 00:09:10.134 "reset": true, 00:09:10.134 "nvme_admin": false, 00:09:10.134 "nvme_io": false, 00:09:10.134 "nvme_io_md": false, 00:09:10.134 "write_zeroes": true, 00:09:10.134 "zcopy": true, 00:09:10.134 "get_zone_info": false, 00:09:10.134 "zone_management": false, 00:09:10.134 "zone_append": false, 00:09:10.134 "compare": false, 00:09:10.134 "compare_and_write": false, 00:09:10.134 "abort": true, 00:09:10.134 "seek_hole": false, 00:09:10.134 "seek_data": false, 00:09:10.134 "copy": true, 00:09:10.134 "nvme_iov_md": false 00:09:10.134 }, 00:09:10.134 "memory_domains": [ 00:09:10.134 { 00:09:10.134 "dma_device_id": "system", 00:09:10.134 "dma_device_type": 1 00:09:10.134 }, 00:09:10.134 { 00:09:10.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.134 "dma_device_type": 2 00:09:10.134 } 00:09:10.134 ], 00:09:10.134 "driver_specific": {} 00:09:10.134 } 00:09:10.134 ] 00:09:10.134 18:49:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.134 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:10.134 "name": "Existed_Raid", 00:09:10.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.134 "strip_size_kb": 0, 00:09:10.134 "state": "configuring", 00:09:10.134 "raid_level": "raid1", 00:09:10.134 "superblock": false, 00:09:10.134 "num_base_bdevs": 3, 00:09:10.134 "num_base_bdevs_discovered": 1, 00:09:10.134 "num_base_bdevs_operational": 3, 00:09:10.134 "base_bdevs_list": [ 00:09:10.134 { 00:09:10.134 "name": "BaseBdev1", 00:09:10.134 "uuid": "e0879ea7-332d-4868-afc0-181821599a56", 00:09:10.134 "is_configured": true, 00:09:10.134 "data_offset": 0, 00:09:10.134 "data_size": 65536 00:09:10.134 }, 00:09:10.134 { 00:09:10.134 "name": "BaseBdev2", 00:09:10.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.134 "is_configured": false, 00:09:10.134 "data_offset": 0, 00:09:10.134 "data_size": 0 00:09:10.134 }, 00:09:10.134 { 00:09:10.134 "name": "BaseBdev3", 00:09:10.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.134 "is_configured": false, 00:09:10.134 "data_offset": 0, 00:09:10.134 "data_size": 0 00:09:10.134 } 00:09:10.134 ] 00:09:10.135 }' 00:09:10.135 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.135 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.395 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:10.395 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.395 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.655 [2024-11-16 18:49:53.868892] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:10.655 [2024-11-16 18:49:53.868947] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.655 [2024-11-16 18:49:53.880904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.655 [2024-11-16 18:49:53.882702] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:10.655 [2024-11-16 18:49:53.882774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:10.655 [2024-11-16 18:49:53.882802] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:10.655 [2024-11-16 18:49:53.882824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.655 "name": "Existed_Raid", 00:09:10.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.655 "strip_size_kb": 0, 00:09:10.655 "state": "configuring", 00:09:10.655 "raid_level": "raid1", 00:09:10.655 "superblock": false, 00:09:10.655 "num_base_bdevs": 3, 00:09:10.655 "num_base_bdevs_discovered": 1, 00:09:10.655 "num_base_bdevs_operational": 3, 00:09:10.655 "base_bdevs_list": [ 00:09:10.655 { 00:09:10.655 "name": "BaseBdev1", 00:09:10.655 "uuid": "e0879ea7-332d-4868-afc0-181821599a56", 00:09:10.655 "is_configured": true, 00:09:10.655 "data_offset": 0, 00:09:10.655 "data_size": 65536 00:09:10.655 }, 00:09:10.655 { 00:09:10.655 "name": "BaseBdev2", 00:09:10.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.655 
"is_configured": false, 00:09:10.655 "data_offset": 0, 00:09:10.655 "data_size": 0 00:09:10.655 }, 00:09:10.655 { 00:09:10.655 "name": "BaseBdev3", 00:09:10.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.655 "is_configured": false, 00:09:10.655 "data_offset": 0, 00:09:10.655 "data_size": 0 00:09:10.655 } 00:09:10.655 ] 00:09:10.655 }' 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.655 18:49:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.915 [2024-11-16 18:49:54.330956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.915 BaseBdev2 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:10.915 18:49:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.915 [ 00:09:10.915 { 00:09:10.915 "name": "BaseBdev2", 00:09:10.915 "aliases": [ 00:09:10.915 "fe1c5701-a957-4977-bfad-fe0f84ec6849" 00:09:10.915 ], 00:09:10.915 "product_name": "Malloc disk", 00:09:10.915 "block_size": 512, 00:09:10.915 "num_blocks": 65536, 00:09:10.915 "uuid": "fe1c5701-a957-4977-bfad-fe0f84ec6849", 00:09:10.915 "assigned_rate_limits": { 00:09:10.915 "rw_ios_per_sec": 0, 00:09:10.915 "rw_mbytes_per_sec": 0, 00:09:10.915 "r_mbytes_per_sec": 0, 00:09:10.915 "w_mbytes_per_sec": 0 00:09:10.915 }, 00:09:10.915 "claimed": true, 00:09:10.915 "claim_type": "exclusive_write", 00:09:10.915 "zoned": false, 00:09:10.915 "supported_io_types": { 00:09:10.915 "read": true, 00:09:10.915 "write": true, 00:09:10.915 "unmap": true, 00:09:10.915 "flush": true, 00:09:10.915 "reset": true, 00:09:10.915 "nvme_admin": false, 00:09:10.915 "nvme_io": false, 00:09:10.915 "nvme_io_md": false, 00:09:10.915 "write_zeroes": true, 00:09:10.915 "zcopy": true, 00:09:10.915 "get_zone_info": false, 00:09:10.915 "zone_management": false, 00:09:10.915 "zone_append": false, 00:09:10.915 "compare": false, 00:09:10.915 "compare_and_write": false, 00:09:10.915 "abort": true, 00:09:10.915 "seek_hole": false, 00:09:10.915 "seek_data": false, 00:09:10.915 "copy": true, 00:09:10.915 "nvme_iov_md": false 00:09:10.915 }, 00:09:10.915 
"memory_domains": [ 00:09:10.915 { 00:09:10.915 "dma_device_id": "system", 00:09:10.915 "dma_device_type": 1 00:09:10.915 }, 00:09:10.915 { 00:09:10.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.915 "dma_device_type": 2 00:09:10.915 } 00:09:10.915 ], 00:09:10.915 "driver_specific": {} 00:09:10.915 } 00:09:10.915 ] 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.915 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.916 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.916 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.916 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:10.916 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.916 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.916 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.175 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.175 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.175 "name": "Existed_Raid", 00:09:11.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.175 "strip_size_kb": 0, 00:09:11.175 "state": "configuring", 00:09:11.175 "raid_level": "raid1", 00:09:11.175 "superblock": false, 00:09:11.175 "num_base_bdevs": 3, 00:09:11.175 "num_base_bdevs_discovered": 2, 00:09:11.175 "num_base_bdevs_operational": 3, 00:09:11.175 "base_bdevs_list": [ 00:09:11.175 { 00:09:11.175 "name": "BaseBdev1", 00:09:11.175 "uuid": "e0879ea7-332d-4868-afc0-181821599a56", 00:09:11.175 "is_configured": true, 00:09:11.175 "data_offset": 0, 00:09:11.175 "data_size": 65536 00:09:11.175 }, 00:09:11.175 { 00:09:11.175 "name": "BaseBdev2", 00:09:11.175 "uuid": "fe1c5701-a957-4977-bfad-fe0f84ec6849", 00:09:11.175 "is_configured": true, 00:09:11.175 "data_offset": 0, 00:09:11.175 "data_size": 65536 00:09:11.175 }, 00:09:11.175 { 00:09:11.175 "name": "BaseBdev3", 00:09:11.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.175 "is_configured": false, 00:09:11.175 "data_offset": 0, 00:09:11.175 "data_size": 0 00:09:11.175 } 00:09:11.175 ] 00:09:11.175 }' 00:09:11.175 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.175 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.435 [2024-11-16 18:49:54.868814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.435 [2024-11-16 18:49:54.868931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:11.435 [2024-11-16 18:49:54.868950] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:11.435 [2024-11-16 18:49:54.869249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:11.435 [2024-11-16 18:49:54.869415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:11.435 [2024-11-16 18:49:54.869424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:11.435 [2024-11-16 18:49:54.869700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.435 BaseBdev3 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.435 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.435 [ 00:09:11.435 { 00:09:11.435 "name": "BaseBdev3", 00:09:11.435 "aliases": [ 00:09:11.435 "29fe51c2-b992-4ae3-834c-77ea39fb68f7" 00:09:11.435 ], 00:09:11.435 "product_name": "Malloc disk", 00:09:11.435 "block_size": 512, 00:09:11.435 "num_blocks": 65536, 00:09:11.435 "uuid": "29fe51c2-b992-4ae3-834c-77ea39fb68f7", 00:09:11.435 "assigned_rate_limits": { 00:09:11.435 "rw_ios_per_sec": 0, 00:09:11.435 "rw_mbytes_per_sec": 0, 00:09:11.435 "r_mbytes_per_sec": 0, 00:09:11.435 "w_mbytes_per_sec": 0 00:09:11.435 }, 00:09:11.435 "claimed": true, 00:09:11.435 "claim_type": "exclusive_write", 00:09:11.435 "zoned": false, 00:09:11.435 "supported_io_types": { 00:09:11.435 "read": true, 00:09:11.435 "write": true, 00:09:11.435 "unmap": true, 00:09:11.435 "flush": true, 00:09:11.435 "reset": true, 00:09:11.435 "nvme_admin": false, 00:09:11.435 "nvme_io": false, 00:09:11.435 "nvme_io_md": false, 00:09:11.435 "write_zeroes": true, 00:09:11.435 "zcopy": true, 00:09:11.435 "get_zone_info": false, 00:09:11.435 "zone_management": false, 00:09:11.435 "zone_append": false, 00:09:11.435 "compare": false, 00:09:11.435 "compare_and_write": false, 00:09:11.435 "abort": true, 00:09:11.435 "seek_hole": false, 00:09:11.435 "seek_data": false, 00:09:11.435 
"copy": true, 00:09:11.435 "nvme_iov_md": false 00:09:11.435 }, 00:09:11.435 "memory_domains": [ 00:09:11.435 { 00:09:11.435 "dma_device_id": "system", 00:09:11.694 "dma_device_type": 1 00:09:11.694 }, 00:09:11.694 { 00:09:11.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.694 "dma_device_type": 2 00:09:11.694 } 00:09:11.694 ], 00:09:11.694 "driver_specific": {} 00:09:11.694 } 00:09:11.694 ] 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.694 18:49:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.694 "name": "Existed_Raid", 00:09:11.694 "uuid": "65a017e3-d35c-40d5-b315-bc05e871b2ed", 00:09:11.694 "strip_size_kb": 0, 00:09:11.694 "state": "online", 00:09:11.694 "raid_level": "raid1", 00:09:11.694 "superblock": false, 00:09:11.694 "num_base_bdevs": 3, 00:09:11.694 "num_base_bdevs_discovered": 3, 00:09:11.694 "num_base_bdevs_operational": 3, 00:09:11.694 "base_bdevs_list": [ 00:09:11.694 { 00:09:11.694 "name": "BaseBdev1", 00:09:11.694 "uuid": "e0879ea7-332d-4868-afc0-181821599a56", 00:09:11.694 "is_configured": true, 00:09:11.694 "data_offset": 0, 00:09:11.694 "data_size": 65536 00:09:11.694 }, 00:09:11.694 { 00:09:11.694 "name": "BaseBdev2", 00:09:11.694 "uuid": "fe1c5701-a957-4977-bfad-fe0f84ec6849", 00:09:11.694 "is_configured": true, 00:09:11.694 "data_offset": 0, 00:09:11.694 "data_size": 65536 00:09:11.694 }, 00:09:11.694 { 00:09:11.694 "name": "BaseBdev3", 00:09:11.694 "uuid": "29fe51c2-b992-4ae3-834c-77ea39fb68f7", 00:09:11.694 "is_configured": true, 00:09:11.694 "data_offset": 0, 00:09:11.694 "data_size": 65536 00:09:11.694 } 00:09:11.694 ] 00:09:11.694 }' 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.694 18:49:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.954 18:49:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.954 [2024-11-16 18:49:55.364404] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:11.954 "name": "Existed_Raid", 00:09:11.954 "aliases": [ 00:09:11.954 "65a017e3-d35c-40d5-b315-bc05e871b2ed" 00:09:11.954 ], 00:09:11.954 "product_name": "Raid Volume", 00:09:11.954 "block_size": 512, 00:09:11.954 "num_blocks": 65536, 00:09:11.954 "uuid": "65a017e3-d35c-40d5-b315-bc05e871b2ed", 00:09:11.954 "assigned_rate_limits": { 00:09:11.954 "rw_ios_per_sec": 0, 00:09:11.954 "rw_mbytes_per_sec": 0, 00:09:11.954 "r_mbytes_per_sec": 0, 00:09:11.954 "w_mbytes_per_sec": 0 00:09:11.954 }, 00:09:11.954 "claimed": false, 00:09:11.954 "zoned": false, 
00:09:11.954 "supported_io_types": { 00:09:11.954 "read": true, 00:09:11.954 "write": true, 00:09:11.954 "unmap": false, 00:09:11.954 "flush": false, 00:09:11.954 "reset": true, 00:09:11.954 "nvme_admin": false, 00:09:11.954 "nvme_io": false, 00:09:11.954 "nvme_io_md": false, 00:09:11.954 "write_zeroes": true, 00:09:11.954 "zcopy": false, 00:09:11.954 "get_zone_info": false, 00:09:11.954 "zone_management": false, 00:09:11.954 "zone_append": false, 00:09:11.954 "compare": false, 00:09:11.954 "compare_and_write": false, 00:09:11.954 "abort": false, 00:09:11.954 "seek_hole": false, 00:09:11.954 "seek_data": false, 00:09:11.954 "copy": false, 00:09:11.954 "nvme_iov_md": false 00:09:11.954 }, 00:09:11.954 "memory_domains": [ 00:09:11.954 { 00:09:11.954 "dma_device_id": "system", 00:09:11.954 "dma_device_type": 1 00:09:11.954 }, 00:09:11.954 { 00:09:11.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.954 "dma_device_type": 2 00:09:11.954 }, 00:09:11.954 { 00:09:11.954 "dma_device_id": "system", 00:09:11.954 "dma_device_type": 1 00:09:11.954 }, 00:09:11.954 { 00:09:11.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.954 "dma_device_type": 2 00:09:11.954 }, 00:09:11.954 { 00:09:11.954 "dma_device_id": "system", 00:09:11.954 "dma_device_type": 1 00:09:11.954 }, 00:09:11.954 { 00:09:11.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.954 "dma_device_type": 2 00:09:11.954 } 00:09:11.954 ], 00:09:11.954 "driver_specific": { 00:09:11.954 "raid": { 00:09:11.954 "uuid": "65a017e3-d35c-40d5-b315-bc05e871b2ed", 00:09:11.954 "strip_size_kb": 0, 00:09:11.954 "state": "online", 00:09:11.954 "raid_level": "raid1", 00:09:11.954 "superblock": false, 00:09:11.954 "num_base_bdevs": 3, 00:09:11.954 "num_base_bdevs_discovered": 3, 00:09:11.954 "num_base_bdevs_operational": 3, 00:09:11.954 "base_bdevs_list": [ 00:09:11.954 { 00:09:11.954 "name": "BaseBdev1", 00:09:11.954 "uuid": "e0879ea7-332d-4868-afc0-181821599a56", 00:09:11.954 "is_configured": true, 00:09:11.954 
"data_offset": 0, 00:09:11.954 "data_size": 65536 00:09:11.954 }, 00:09:11.954 { 00:09:11.954 "name": "BaseBdev2", 00:09:11.954 "uuid": "fe1c5701-a957-4977-bfad-fe0f84ec6849", 00:09:11.954 "is_configured": true, 00:09:11.954 "data_offset": 0, 00:09:11.954 "data_size": 65536 00:09:11.954 }, 00:09:11.954 { 00:09:11.954 "name": "BaseBdev3", 00:09:11.954 "uuid": "29fe51c2-b992-4ae3-834c-77ea39fb68f7", 00:09:11.954 "is_configured": true, 00:09:11.954 "data_offset": 0, 00:09:11.954 "data_size": 65536 00:09:11.954 } 00:09:11.954 ] 00:09:11.954 } 00:09:11.954 } 00:09:11.954 }' 00:09:11.954 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:12.214 BaseBdev2 00:09:12.214 BaseBdev3' 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.214 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.215 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.215 [2024-11-16 18:49:55.603759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.474 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.475 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.475 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.475 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.475 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.475 "name": "Existed_Raid", 00:09:12.475 "uuid": "65a017e3-d35c-40d5-b315-bc05e871b2ed", 00:09:12.475 "strip_size_kb": 0, 00:09:12.475 "state": "online", 00:09:12.475 "raid_level": "raid1", 00:09:12.475 "superblock": false, 00:09:12.475 "num_base_bdevs": 3, 00:09:12.475 "num_base_bdevs_discovered": 2, 00:09:12.475 "num_base_bdevs_operational": 2, 00:09:12.475 "base_bdevs_list": [ 00:09:12.475 { 00:09:12.475 "name": null, 00:09:12.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.475 "is_configured": false, 00:09:12.475 "data_offset": 0, 00:09:12.475 "data_size": 65536 00:09:12.475 }, 00:09:12.475 { 00:09:12.475 "name": "BaseBdev2", 00:09:12.475 "uuid": "fe1c5701-a957-4977-bfad-fe0f84ec6849", 00:09:12.475 "is_configured": true, 00:09:12.475 "data_offset": 0, 00:09:12.475 "data_size": 65536 00:09:12.475 }, 00:09:12.475 { 00:09:12.475 "name": "BaseBdev3", 00:09:12.475 "uuid": "29fe51c2-b992-4ae3-834c-77ea39fb68f7", 00:09:12.475 "is_configured": true, 00:09:12.475 "data_offset": 0, 00:09:12.475 "data_size": 65536 00:09:12.475 } 00:09:12.475 ] 
00:09:12.475 }' 00:09:12.475 18:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.475 18:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.735 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.735 [2024-11-16 18:49:56.187195] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.995 18:49:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.995 [2024-11-16 18:49:56.353593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.995 [2024-11-16 18:49:56.353817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:12.995 [2024-11-16 18:49:56.457889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:12.995 [2024-11-16 18:49:56.458016] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:12.995 [2024-11-16 18:49:56.458062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:12.995 18:49:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.995 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.255 BaseBdev2 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.255 
18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.255 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.255 [ 00:09:13.255 { 00:09:13.255 "name": "BaseBdev2", 00:09:13.255 "aliases": [ 00:09:13.255 "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7" 00:09:13.255 ], 00:09:13.255 "product_name": "Malloc disk", 00:09:13.255 "block_size": 512, 00:09:13.255 "num_blocks": 65536, 00:09:13.255 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:13.255 "assigned_rate_limits": { 00:09:13.255 "rw_ios_per_sec": 0, 00:09:13.255 "rw_mbytes_per_sec": 0, 00:09:13.255 "r_mbytes_per_sec": 0, 00:09:13.255 "w_mbytes_per_sec": 0 00:09:13.255 }, 00:09:13.255 "claimed": false, 00:09:13.255 "zoned": false, 00:09:13.255 "supported_io_types": { 00:09:13.255 "read": true, 00:09:13.255 "write": true, 00:09:13.255 "unmap": true, 00:09:13.255 "flush": true, 00:09:13.255 "reset": true, 00:09:13.255 "nvme_admin": false, 00:09:13.255 "nvme_io": false, 00:09:13.255 "nvme_io_md": false, 00:09:13.255 "write_zeroes": true, 
00:09:13.255 "zcopy": true, 00:09:13.255 "get_zone_info": false, 00:09:13.255 "zone_management": false, 00:09:13.255 "zone_append": false, 00:09:13.255 "compare": false, 00:09:13.255 "compare_and_write": false, 00:09:13.256 "abort": true, 00:09:13.256 "seek_hole": false, 00:09:13.256 "seek_data": false, 00:09:13.256 "copy": true, 00:09:13.256 "nvme_iov_md": false 00:09:13.256 }, 00:09:13.256 "memory_domains": [ 00:09:13.256 { 00:09:13.256 "dma_device_id": "system", 00:09:13.256 "dma_device_type": 1 00:09:13.256 }, 00:09:13.256 { 00:09:13.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.256 "dma_device_type": 2 00:09:13.256 } 00:09:13.256 ], 00:09:13.256 "driver_specific": {} 00:09:13.256 } 00:09:13.256 ] 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.256 BaseBdev3 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:13.256 18:49:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.256 [ 00:09:13.256 { 00:09:13.256 "name": "BaseBdev3", 00:09:13.256 "aliases": [ 00:09:13.256 "686c6d9b-f559-49d4-9e6f-3d9ccfc78870" 00:09:13.256 ], 00:09:13.256 "product_name": "Malloc disk", 00:09:13.256 "block_size": 512, 00:09:13.256 "num_blocks": 65536, 00:09:13.256 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:13.256 "assigned_rate_limits": { 00:09:13.256 "rw_ios_per_sec": 0, 00:09:13.256 "rw_mbytes_per_sec": 0, 00:09:13.256 "r_mbytes_per_sec": 0, 00:09:13.256 "w_mbytes_per_sec": 0 00:09:13.256 }, 00:09:13.256 "claimed": false, 00:09:13.256 "zoned": false, 00:09:13.256 "supported_io_types": { 00:09:13.256 "read": true, 00:09:13.256 "write": true, 00:09:13.256 "unmap": true, 00:09:13.256 "flush": true, 00:09:13.256 "reset": true, 00:09:13.256 "nvme_admin": false, 00:09:13.256 "nvme_io": false, 00:09:13.256 "nvme_io_md": false, 00:09:13.256 "write_zeroes": true, 
00:09:13.256 "zcopy": true, 00:09:13.256 "get_zone_info": false, 00:09:13.256 "zone_management": false, 00:09:13.256 "zone_append": false, 00:09:13.256 "compare": false, 00:09:13.256 "compare_and_write": false, 00:09:13.256 "abort": true, 00:09:13.256 "seek_hole": false, 00:09:13.256 "seek_data": false, 00:09:13.256 "copy": true, 00:09:13.256 "nvme_iov_md": false 00:09:13.256 }, 00:09:13.256 "memory_domains": [ 00:09:13.256 { 00:09:13.256 "dma_device_id": "system", 00:09:13.256 "dma_device_type": 1 00:09:13.256 }, 00:09:13.256 { 00:09:13.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:13.256 "dma_device_type": 2 00:09:13.256 } 00:09:13.256 ], 00:09:13.256 "driver_specific": {} 00:09:13.256 } 00:09:13.256 ] 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.256 [2024-11-16 18:49:56.694699] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.256 [2024-11-16 18:49:56.694760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.256 [2024-11-16 18:49:56.694781] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.256 [2024-11-16 18:49:56.696879] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.256 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.516 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:09:13.516 "name": "Existed_Raid", 00:09:13.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.516 "strip_size_kb": 0, 00:09:13.516 "state": "configuring", 00:09:13.516 "raid_level": "raid1", 00:09:13.516 "superblock": false, 00:09:13.516 "num_base_bdevs": 3, 00:09:13.516 "num_base_bdevs_discovered": 2, 00:09:13.516 "num_base_bdevs_operational": 3, 00:09:13.516 "base_bdevs_list": [ 00:09:13.516 { 00:09:13.516 "name": "BaseBdev1", 00:09:13.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.516 "is_configured": false, 00:09:13.516 "data_offset": 0, 00:09:13.516 "data_size": 0 00:09:13.516 }, 00:09:13.516 { 00:09:13.516 "name": "BaseBdev2", 00:09:13.516 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:13.516 "is_configured": true, 00:09:13.516 "data_offset": 0, 00:09:13.516 "data_size": 65536 00:09:13.516 }, 00:09:13.516 { 00:09:13.516 "name": "BaseBdev3", 00:09:13.516 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:13.516 "is_configured": true, 00:09:13.516 "data_offset": 0, 00:09:13.516 "data_size": 65536 00:09:13.516 } 00:09:13.516 ] 00:09:13.516 }' 00:09:13.516 18:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.516 18:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.776 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:13.776 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.776 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.776 [2024-11-16 18:49:57.094059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:13.776 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.776 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:09:13.776 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.777 "name": "Existed_Raid", 00:09:13.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.777 "strip_size_kb": 0, 00:09:13.777 "state": "configuring", 00:09:13.777 "raid_level": "raid1", 00:09:13.777 "superblock": false, 00:09:13.777 "num_base_bdevs": 3, 
00:09:13.777 "num_base_bdevs_discovered": 1, 00:09:13.777 "num_base_bdevs_operational": 3, 00:09:13.777 "base_bdevs_list": [ 00:09:13.777 { 00:09:13.777 "name": "BaseBdev1", 00:09:13.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.777 "is_configured": false, 00:09:13.777 "data_offset": 0, 00:09:13.777 "data_size": 0 00:09:13.777 }, 00:09:13.777 { 00:09:13.777 "name": null, 00:09:13.777 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:13.777 "is_configured": false, 00:09:13.777 "data_offset": 0, 00:09:13.777 "data_size": 65536 00:09:13.777 }, 00:09:13.777 { 00:09:13.777 "name": "BaseBdev3", 00:09:13.777 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:13.777 "is_configured": true, 00:09:13.777 "data_offset": 0, 00:09:13.777 "data_size": 65536 00:09:13.777 } 00:09:13.777 ] 00:09:13.777 }' 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.777 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.037 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.037 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.037 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:14.037 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.037 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.299 18:49:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 [2024-11-16 18:49:57.572189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.299 BaseBdev1 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 [ 00:09:14.299 { 00:09:14.299 "name": "BaseBdev1", 00:09:14.299 "aliases": [ 00:09:14.299 "89cd2843-432e-441f-a227-033c9ba6d1ad" 00:09:14.299 ], 00:09:14.299 "product_name": "Malloc disk", 
00:09:14.299 "block_size": 512, 00:09:14.299 "num_blocks": 65536, 00:09:14.299 "uuid": "89cd2843-432e-441f-a227-033c9ba6d1ad", 00:09:14.299 "assigned_rate_limits": { 00:09:14.299 "rw_ios_per_sec": 0, 00:09:14.299 "rw_mbytes_per_sec": 0, 00:09:14.299 "r_mbytes_per_sec": 0, 00:09:14.299 "w_mbytes_per_sec": 0 00:09:14.299 }, 00:09:14.299 "claimed": true, 00:09:14.299 "claim_type": "exclusive_write", 00:09:14.299 "zoned": false, 00:09:14.299 "supported_io_types": { 00:09:14.299 "read": true, 00:09:14.299 "write": true, 00:09:14.299 "unmap": true, 00:09:14.299 "flush": true, 00:09:14.299 "reset": true, 00:09:14.299 "nvme_admin": false, 00:09:14.299 "nvme_io": false, 00:09:14.299 "nvme_io_md": false, 00:09:14.299 "write_zeroes": true, 00:09:14.299 "zcopy": true, 00:09:14.299 "get_zone_info": false, 00:09:14.299 "zone_management": false, 00:09:14.299 "zone_append": false, 00:09:14.299 "compare": false, 00:09:14.299 "compare_and_write": false, 00:09:14.299 "abort": true, 00:09:14.299 "seek_hole": false, 00:09:14.299 "seek_data": false, 00:09:14.299 "copy": true, 00:09:14.299 "nvme_iov_md": false 00:09:14.299 }, 00:09:14.299 "memory_domains": [ 00:09:14.299 { 00:09:14.299 "dma_device_id": "system", 00:09:14.299 "dma_device_type": 1 00:09:14.299 }, 00:09:14.299 { 00:09:14.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.299 "dma_device_type": 2 00:09:14.299 } 00:09:14.299 ], 00:09:14.299 "driver_specific": {} 00:09:14.299 } 00:09:14.299 ] 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.299 "name": "Existed_Raid", 00:09:14.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.299 "strip_size_kb": 0, 00:09:14.299 "state": "configuring", 00:09:14.299 "raid_level": "raid1", 00:09:14.299 "superblock": false, 00:09:14.299 "num_base_bdevs": 3, 00:09:14.299 "num_base_bdevs_discovered": 2, 00:09:14.299 "num_base_bdevs_operational": 3, 00:09:14.299 "base_bdevs_list": [ 00:09:14.299 { 00:09:14.299 "name": "BaseBdev1", 00:09:14.299 "uuid": 
"89cd2843-432e-441f-a227-033c9ba6d1ad", 00:09:14.299 "is_configured": true, 00:09:14.299 "data_offset": 0, 00:09:14.299 "data_size": 65536 00:09:14.299 }, 00:09:14.299 { 00:09:14.299 "name": null, 00:09:14.299 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:14.299 "is_configured": false, 00:09:14.299 "data_offset": 0, 00:09:14.299 "data_size": 65536 00:09:14.299 }, 00:09:14.299 { 00:09:14.299 "name": "BaseBdev3", 00:09:14.299 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:14.299 "is_configured": true, 00:09:14.299 "data_offset": 0, 00:09:14.299 "data_size": 65536 00:09:14.299 } 00:09:14.299 ] 00:09:14.299 }' 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.299 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.559 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.559 18:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.559 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.559 18:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.559 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.559 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:14.559 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:14.559 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.559 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.819 [2024-11-16 18:49:58.031509] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:14.819 18:49:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.819 "name": "Existed_Raid", 00:09:14.819 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:14.819 "strip_size_kb": 0, 00:09:14.819 "state": "configuring", 00:09:14.819 "raid_level": "raid1", 00:09:14.819 "superblock": false, 00:09:14.819 "num_base_bdevs": 3, 00:09:14.819 "num_base_bdevs_discovered": 1, 00:09:14.819 "num_base_bdevs_operational": 3, 00:09:14.819 "base_bdevs_list": [ 00:09:14.819 { 00:09:14.819 "name": "BaseBdev1", 00:09:14.819 "uuid": "89cd2843-432e-441f-a227-033c9ba6d1ad", 00:09:14.819 "is_configured": true, 00:09:14.819 "data_offset": 0, 00:09:14.819 "data_size": 65536 00:09:14.819 }, 00:09:14.819 { 00:09:14.819 "name": null, 00:09:14.819 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:14.819 "is_configured": false, 00:09:14.819 "data_offset": 0, 00:09:14.819 "data_size": 65536 00:09:14.819 }, 00:09:14.819 { 00:09:14.819 "name": null, 00:09:14.819 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:14.819 "is_configured": false, 00:09:14.819 "data_offset": 0, 00:09:14.819 "data_size": 65536 00:09:14.819 } 00:09:14.819 ] 00:09:14.819 }' 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.819 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.079 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.079 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:15.079 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.079 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.079 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.079 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:15.079 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:15.079 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.079 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.079 [2024-11-16 18:49:58.546714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.339 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.340 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:09:15.340 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.340 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.340 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.340 "name": "Existed_Raid", 00:09:15.340 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.340 "strip_size_kb": 0, 00:09:15.340 "state": "configuring", 00:09:15.340 "raid_level": "raid1", 00:09:15.340 "superblock": false, 00:09:15.340 "num_base_bdevs": 3, 00:09:15.340 "num_base_bdevs_discovered": 2, 00:09:15.340 "num_base_bdevs_operational": 3, 00:09:15.340 "base_bdevs_list": [ 00:09:15.340 { 00:09:15.340 "name": "BaseBdev1", 00:09:15.340 "uuid": "89cd2843-432e-441f-a227-033c9ba6d1ad", 00:09:15.340 "is_configured": true, 00:09:15.340 "data_offset": 0, 00:09:15.340 "data_size": 65536 00:09:15.340 }, 00:09:15.340 { 00:09:15.340 "name": null, 00:09:15.340 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:15.340 "is_configured": false, 00:09:15.340 "data_offset": 0, 00:09:15.340 "data_size": 65536 00:09:15.340 }, 00:09:15.340 { 00:09:15.340 "name": "BaseBdev3", 00:09:15.340 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:15.340 "is_configured": true, 00:09:15.340 "data_offset": 0, 00:09:15.340 "data_size": 65536 00:09:15.340 } 00:09:15.340 ] 00:09:15.340 }' 00:09:15.340 18:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.340 18:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.599 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:15.599 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.599 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:15.599 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.599 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.599 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:15.599 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:15.599 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.599 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.599 [2024-11-16 18:49:59.065805] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.860 "name": "Existed_Raid", 00:09:15.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.860 "strip_size_kb": 0, 00:09:15.860 "state": "configuring", 00:09:15.860 "raid_level": "raid1", 00:09:15.860 "superblock": false, 00:09:15.860 "num_base_bdevs": 3, 00:09:15.860 "num_base_bdevs_discovered": 1, 00:09:15.860 "num_base_bdevs_operational": 3, 00:09:15.860 "base_bdevs_list": [ 00:09:15.860 { 00:09:15.860 "name": null, 00:09:15.860 "uuid": "89cd2843-432e-441f-a227-033c9ba6d1ad", 00:09:15.860 "is_configured": false, 00:09:15.860 "data_offset": 0, 00:09:15.860 "data_size": 65536 00:09:15.860 }, 00:09:15.860 { 00:09:15.860 "name": null, 00:09:15.860 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:15.860 "is_configured": false, 00:09:15.860 "data_offset": 0, 00:09:15.860 "data_size": 65536 00:09:15.860 }, 00:09:15.860 { 00:09:15.860 "name": "BaseBdev3", 00:09:15.860 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:15.860 "is_configured": true, 00:09:15.860 "data_offset": 0, 00:09:15.860 "data_size": 65536 00:09:15.860 } 00:09:15.860 ] 00:09:15.860 }' 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.860 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.430 [2024-11-16 18:49:59.669337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.430 "name": "Existed_Raid", 00:09:16.430 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.430 "strip_size_kb": 0, 00:09:16.430 "state": "configuring", 00:09:16.430 "raid_level": "raid1", 00:09:16.430 "superblock": false, 00:09:16.430 "num_base_bdevs": 3, 00:09:16.430 "num_base_bdevs_discovered": 2, 00:09:16.430 "num_base_bdevs_operational": 3, 00:09:16.430 "base_bdevs_list": [ 00:09:16.430 { 00:09:16.430 "name": null, 00:09:16.430 "uuid": "89cd2843-432e-441f-a227-033c9ba6d1ad", 00:09:16.430 "is_configured": false, 00:09:16.430 "data_offset": 0, 00:09:16.430 "data_size": 65536 00:09:16.430 }, 00:09:16.430 { 00:09:16.430 "name": "BaseBdev2", 00:09:16.430 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:16.430 "is_configured": true, 00:09:16.430 "data_offset": 0, 00:09:16.430 "data_size": 65536 00:09:16.430 }, 00:09:16.430 { 00:09:16.430 "name": "BaseBdev3", 
00:09:16.430 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:16.430 "is_configured": true, 00:09:16.430 "data_offset": 0, 00:09:16.430 "data_size": 65536 00:09:16.430 } 00:09:16.430 ] 00:09:16.430 }' 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.430 18:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.691 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.691 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.691 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.691 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 89cd2843-432e-441f-a227-033c9ba6d1ad 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.951 [2024-11-16 18:50:00.283295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:16.951 [2024-11-16 18:50:00.283344] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:16.951 [2024-11-16 18:50:00.283351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:16.951 [2024-11-16 18:50:00.283579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:16.951 [2024-11-16 18:50:00.283791] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:16.951 [2024-11-16 18:50:00.283806] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:16.951 [2024-11-16 18:50:00.284067] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.951 NewBaseBdev 00:09:16.951 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.952 
18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.952 [ 00:09:16.952 { 00:09:16.952 "name": "NewBaseBdev", 00:09:16.952 "aliases": [ 00:09:16.952 "89cd2843-432e-441f-a227-033c9ba6d1ad" 00:09:16.952 ], 00:09:16.952 "product_name": "Malloc disk", 00:09:16.952 "block_size": 512, 00:09:16.952 "num_blocks": 65536, 00:09:16.952 "uuid": "89cd2843-432e-441f-a227-033c9ba6d1ad", 00:09:16.952 "assigned_rate_limits": { 00:09:16.952 "rw_ios_per_sec": 0, 00:09:16.952 "rw_mbytes_per_sec": 0, 00:09:16.952 "r_mbytes_per_sec": 0, 00:09:16.952 "w_mbytes_per_sec": 0 00:09:16.952 }, 00:09:16.952 "claimed": true, 00:09:16.952 "claim_type": "exclusive_write", 00:09:16.952 "zoned": false, 00:09:16.952 "supported_io_types": { 00:09:16.952 "read": true, 00:09:16.952 "write": true, 00:09:16.952 "unmap": true, 00:09:16.952 "flush": true, 00:09:16.952 "reset": true, 00:09:16.952 "nvme_admin": false, 00:09:16.952 "nvme_io": false, 00:09:16.952 "nvme_io_md": false, 00:09:16.952 "write_zeroes": true, 00:09:16.952 "zcopy": true, 00:09:16.952 "get_zone_info": false, 00:09:16.952 "zone_management": false, 00:09:16.952 "zone_append": false, 00:09:16.952 "compare": false, 00:09:16.952 "compare_and_write": false, 00:09:16.952 "abort": true, 00:09:16.952 "seek_hole": false, 00:09:16.952 "seek_data": false, 00:09:16.952 "copy": true, 00:09:16.952 "nvme_iov_md": false 00:09:16.952 }, 00:09:16.952 "memory_domains": [ 00:09:16.952 { 00:09:16.952 "dma_device_id": "system", 00:09:16.952 "dma_device_type": 1 
00:09:16.952 }, 00:09:16.952 { 00:09:16.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:16.952 "dma_device_type": 2 00:09:16.952 } 00:09:16.952 ], 00:09:16.952 "driver_specific": {} 00:09:16.952 } 00:09:16.952 ] 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:16.952 "name": "Existed_Raid", 00:09:16.952 "uuid": "83a8407f-bad3-4cb6-8a44-a46c0488d39a", 00:09:16.952 "strip_size_kb": 0, 00:09:16.952 "state": "online", 00:09:16.952 "raid_level": "raid1", 00:09:16.952 "superblock": false, 00:09:16.952 "num_base_bdevs": 3, 00:09:16.952 "num_base_bdevs_discovered": 3, 00:09:16.952 "num_base_bdevs_operational": 3, 00:09:16.952 "base_bdevs_list": [ 00:09:16.952 { 00:09:16.952 "name": "NewBaseBdev", 00:09:16.952 "uuid": "89cd2843-432e-441f-a227-033c9ba6d1ad", 00:09:16.952 "is_configured": true, 00:09:16.952 "data_offset": 0, 00:09:16.952 "data_size": 65536 00:09:16.952 }, 00:09:16.952 { 00:09:16.952 "name": "BaseBdev2", 00:09:16.952 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:16.952 "is_configured": true, 00:09:16.952 "data_offset": 0, 00:09:16.952 "data_size": 65536 00:09:16.952 }, 00:09:16.952 { 00:09:16.952 "name": "BaseBdev3", 00:09:16.952 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:16.952 "is_configured": true, 00:09:16.952 "data_offset": 0, 00:09:16.952 "data_size": 65536 00:09:16.952 } 00:09:16.952 ] 00:09:16.952 }' 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:16.952 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.522 [2024-11-16 18:50:00.718852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:17.522 "name": "Existed_Raid", 00:09:17.522 "aliases": [ 00:09:17.522 "83a8407f-bad3-4cb6-8a44-a46c0488d39a" 00:09:17.522 ], 00:09:17.522 "product_name": "Raid Volume", 00:09:17.522 "block_size": 512, 00:09:17.522 "num_blocks": 65536, 00:09:17.522 "uuid": "83a8407f-bad3-4cb6-8a44-a46c0488d39a", 00:09:17.522 "assigned_rate_limits": { 00:09:17.522 "rw_ios_per_sec": 0, 00:09:17.522 "rw_mbytes_per_sec": 0, 00:09:17.522 "r_mbytes_per_sec": 0, 00:09:17.522 "w_mbytes_per_sec": 0 00:09:17.522 }, 00:09:17.522 "claimed": false, 00:09:17.522 "zoned": false, 00:09:17.522 "supported_io_types": { 00:09:17.522 "read": true, 00:09:17.522 "write": true, 00:09:17.522 "unmap": false, 00:09:17.522 "flush": false, 00:09:17.522 "reset": true, 00:09:17.522 "nvme_admin": false, 00:09:17.522 "nvme_io": false, 00:09:17.522 "nvme_io_md": false, 00:09:17.522 "write_zeroes": true, 00:09:17.522 "zcopy": false, 00:09:17.522 "get_zone_info": false, 00:09:17.522 "zone_management": false, 00:09:17.522 
"zone_append": false, 00:09:17.522 "compare": false, 00:09:17.522 "compare_and_write": false, 00:09:17.522 "abort": false, 00:09:17.522 "seek_hole": false, 00:09:17.522 "seek_data": false, 00:09:17.522 "copy": false, 00:09:17.522 "nvme_iov_md": false 00:09:17.522 }, 00:09:17.522 "memory_domains": [ 00:09:17.522 { 00:09:17.522 "dma_device_id": "system", 00:09:17.522 "dma_device_type": 1 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.522 "dma_device_type": 2 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "dma_device_id": "system", 00:09:17.522 "dma_device_type": 1 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.522 "dma_device_type": 2 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "dma_device_id": "system", 00:09:17.522 "dma_device_type": 1 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.522 "dma_device_type": 2 00:09:17.522 } 00:09:17.522 ], 00:09:17.522 "driver_specific": { 00:09:17.522 "raid": { 00:09:17.522 "uuid": "83a8407f-bad3-4cb6-8a44-a46c0488d39a", 00:09:17.522 "strip_size_kb": 0, 00:09:17.522 "state": "online", 00:09:17.522 "raid_level": "raid1", 00:09:17.522 "superblock": false, 00:09:17.522 "num_base_bdevs": 3, 00:09:17.522 "num_base_bdevs_discovered": 3, 00:09:17.522 "num_base_bdevs_operational": 3, 00:09:17.522 "base_bdevs_list": [ 00:09:17.522 { 00:09:17.522 "name": "NewBaseBdev", 00:09:17.522 "uuid": "89cd2843-432e-441f-a227-033c9ba6d1ad", 00:09:17.522 "is_configured": true, 00:09:17.522 "data_offset": 0, 00:09:17.522 "data_size": 65536 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "name": "BaseBdev2", 00:09:17.522 "uuid": "7c08e0fa-51e5-4d2c-b63a-1e1550455cf7", 00:09:17.522 "is_configured": true, 00:09:17.522 "data_offset": 0, 00:09:17.522 "data_size": 65536 00:09:17.522 }, 00:09:17.522 { 00:09:17.522 "name": "BaseBdev3", 00:09:17.522 "uuid": "686c6d9b-f559-49d4-9e6f-3d9ccfc78870", 00:09:17.522 "is_configured": true, 
00:09:17.522 "data_offset": 0, 00:09:17.522 "data_size": 65536 00:09:17.522 } 00:09:17.522 ] 00:09:17.522 } 00:09:17.522 } 00:09:17.522 }' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:17.522 BaseBdev2 00:09:17.522 BaseBdev3' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.522 18:50:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.522 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.522 [2024-11-16 18:50:00.986102] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:17.522 [2024-11-16 18:50:00.986180] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:17.522 [2024-11-16 18:50:00.986274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:17.522 [2024-11-16 18:50:00.986636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:17.522 [2024-11-16 18:50:00.986716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:17.523 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.523 18:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67204 00:09:17.523 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67204 ']' 00:09:17.523 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67204 00:09:17.782 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:17.782 18:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:17.782 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67204 00:09:17.782 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:17.782 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:17.782 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67204' 00:09:17.782 killing process with pid 67204 00:09:17.782 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67204 00:09:17.782 [2024-11-16 18:50:01.030947] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:09:17.782 18:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67204 00:09:18.042 [2024-11-16 18:50:01.314546] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:18.981 00:09:18.981 real 0m10.349s 00:09:18.981 user 0m16.360s 00:09:18.981 sys 0m1.924s 00:09:18.981 ************************************ 00:09:18.981 END TEST raid_state_function_test 00:09:18.981 ************************************ 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.981 18:50:02 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:09:18.981 18:50:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:18.981 18:50:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.981 18:50:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:18.981 ************************************ 00:09:18.981 START TEST raid_state_function_test_sb 00:09:18.981 ************************************ 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:18.981 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67826 00:09:19.241 Process raid pid: 67826 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67826' 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67826 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67826 ']' 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:19.241 18:50:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.241 [2024-11-16 18:50:02.538166] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:19.241 [2024-11-16 18:50:02.538376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:19.241 [2024-11-16 18:50:02.712914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.501 [2024-11-16 18:50:02.823372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.761 [2024-11-16 18:50:03.021766] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:19.761 [2024-11-16 18:50:03.021860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.020 [2024-11-16 18:50:03.387403] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.020 [2024-11-16 18:50:03.387531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.020 [2024-11-16 18:50:03.387546] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.020 [2024-11-16 18:50:03.387555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.020 [2024-11-16 18:50:03.387562] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:20.020 [2024-11-16 18:50:03.387570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.020 "name": "Existed_Raid", 00:09:20.020 "uuid": "41730ef7-2984-4b77-9224-157edf51bdaa", 00:09:20.020 "strip_size_kb": 0, 00:09:20.020 "state": "configuring", 00:09:20.020 "raid_level": "raid1", 00:09:20.020 "superblock": true, 00:09:20.020 "num_base_bdevs": 3, 00:09:20.020 "num_base_bdevs_discovered": 0, 00:09:20.020 "num_base_bdevs_operational": 3, 00:09:20.020 "base_bdevs_list": [ 00:09:20.020 { 00:09:20.020 "name": "BaseBdev1", 00:09:20.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.020 "is_configured": false, 00:09:20.020 "data_offset": 0, 00:09:20.020 "data_size": 0 00:09:20.020 }, 00:09:20.020 { 00:09:20.020 "name": "BaseBdev2", 00:09:20.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.020 "is_configured": false, 00:09:20.020 "data_offset": 0, 00:09:20.020 "data_size": 0 00:09:20.020 }, 00:09:20.020 { 00:09:20.020 "name": "BaseBdev3", 00:09:20.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.020 "is_configured": false, 00:09:20.020 "data_offset": 0, 00:09:20.020 "data_size": 0 00:09:20.020 } 00:09:20.020 ] 00:09:20.020 }' 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.020 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 [2024-11-16 18:50:03.814635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.591 [2024-11-16 18:50:03.814746] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 [2024-11-16 18:50:03.826604] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:20.591 [2024-11-16 18:50:03.826719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:20.591 [2024-11-16 18:50:03.826753] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.591 [2024-11-16 18:50:03.826778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.591 [2024-11-16 18:50:03.826797] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.591 [2024-11-16 18:50:03.826818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 [2024-11-16 18:50:03.874510] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.591 BaseBdev1 
00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.591 [ 00:09:20.591 { 00:09:20.591 "name": "BaseBdev1", 00:09:20.591 "aliases": [ 00:09:20.591 "9e740063-8d69-4540-969c-5ecd07c3f913" 00:09:20.591 ], 00:09:20.591 "product_name": "Malloc disk", 00:09:20.591 "block_size": 512, 00:09:20.591 "num_blocks": 65536, 00:09:20.591 "uuid": "9e740063-8d69-4540-969c-5ecd07c3f913", 00:09:20.591 "assigned_rate_limits": { 00:09:20.591 
"rw_ios_per_sec": 0, 00:09:20.591 "rw_mbytes_per_sec": 0, 00:09:20.591 "r_mbytes_per_sec": 0, 00:09:20.591 "w_mbytes_per_sec": 0 00:09:20.591 }, 00:09:20.591 "claimed": true, 00:09:20.591 "claim_type": "exclusive_write", 00:09:20.591 "zoned": false, 00:09:20.591 "supported_io_types": { 00:09:20.591 "read": true, 00:09:20.591 "write": true, 00:09:20.591 "unmap": true, 00:09:20.591 "flush": true, 00:09:20.591 "reset": true, 00:09:20.591 "nvme_admin": false, 00:09:20.591 "nvme_io": false, 00:09:20.591 "nvme_io_md": false, 00:09:20.591 "write_zeroes": true, 00:09:20.591 "zcopy": true, 00:09:20.591 "get_zone_info": false, 00:09:20.591 "zone_management": false, 00:09:20.591 "zone_append": false, 00:09:20.591 "compare": false, 00:09:20.591 "compare_and_write": false, 00:09:20.591 "abort": true, 00:09:20.591 "seek_hole": false, 00:09:20.591 "seek_data": false, 00:09:20.591 "copy": true, 00:09:20.591 "nvme_iov_md": false 00:09:20.591 }, 00:09:20.591 "memory_domains": [ 00:09:20.591 { 00:09:20.591 "dma_device_id": "system", 00:09:20.591 "dma_device_type": 1 00:09:20.591 }, 00:09:20.591 { 00:09:20.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.591 "dma_device_type": 2 00:09:20.591 } 00:09:20.591 ], 00:09:20.591 "driver_specific": {} 00:09:20.591 } 00:09:20.591 ] 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.591 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.592 "name": "Existed_Raid", 00:09:20.592 "uuid": "25d9f465-9a30-4e9b-b241-3a7cb8802501", 00:09:20.592 "strip_size_kb": 0, 00:09:20.592 "state": "configuring", 00:09:20.592 "raid_level": "raid1", 00:09:20.592 "superblock": true, 00:09:20.592 "num_base_bdevs": 3, 00:09:20.592 "num_base_bdevs_discovered": 1, 00:09:20.592 "num_base_bdevs_operational": 3, 00:09:20.592 "base_bdevs_list": [ 00:09:20.592 { 00:09:20.592 "name": "BaseBdev1", 00:09:20.592 "uuid": "9e740063-8d69-4540-969c-5ecd07c3f913", 00:09:20.592 "is_configured": true, 00:09:20.592 "data_offset": 2048, 00:09:20.592 "data_size": 63488 
00:09:20.592 }, 00:09:20.592 { 00:09:20.592 "name": "BaseBdev2", 00:09:20.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.592 "is_configured": false, 00:09:20.592 "data_offset": 0, 00:09:20.592 "data_size": 0 00:09:20.592 }, 00:09:20.592 { 00:09:20.592 "name": "BaseBdev3", 00:09:20.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:20.592 "is_configured": false, 00:09:20.592 "data_offset": 0, 00:09:20.592 "data_size": 0 00:09:20.592 } 00:09:20.592 ] 00:09:20.592 }' 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.592 18:50:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.851 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:20.851 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.851 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.851 [2024-11-16 18:50:04.289828] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:20.851 [2024-11-16 18:50:04.289930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:20.851 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.851 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:20.851 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.851 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.851 [2024-11-16 18:50:04.297860] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:20.851 [2024-11-16 18:50:04.299637] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:20.851 [2024-11-16 18:50:04.299685] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:20.851 [2024-11-16 18:50:04.299696] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:20.851 [2024-11-16 18:50:04.299704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:20.851 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.852 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.111 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.111 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.111 "name": "Existed_Raid", 00:09:21.111 "uuid": "1ed54159-76c0-475d-9fc6-29597c695968", 00:09:21.111 "strip_size_kb": 0, 00:09:21.111 "state": "configuring", 00:09:21.111 "raid_level": "raid1", 00:09:21.111 "superblock": true, 00:09:21.111 "num_base_bdevs": 3, 00:09:21.111 "num_base_bdevs_discovered": 1, 00:09:21.111 "num_base_bdevs_operational": 3, 00:09:21.111 "base_bdevs_list": [ 00:09:21.111 { 00:09:21.111 "name": "BaseBdev1", 00:09:21.111 "uuid": "9e740063-8d69-4540-969c-5ecd07c3f913", 00:09:21.111 "is_configured": true, 00:09:21.111 "data_offset": 2048, 00:09:21.111 "data_size": 63488 00:09:21.111 }, 00:09:21.111 { 00:09:21.111 "name": "BaseBdev2", 00:09:21.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.111 "is_configured": false, 00:09:21.111 "data_offset": 0, 00:09:21.111 "data_size": 0 00:09:21.111 }, 00:09:21.111 { 00:09:21.111 "name": "BaseBdev3", 00:09:21.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.111 "is_configured": false, 00:09:21.111 "data_offset": 0, 00:09:21.111 "data_size": 0 00:09:21.111 } 00:09:21.111 ] 00:09:21.111 }' 00:09:21.111 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.111 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.372 [2024-11-16 18:50:04.707222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.372 BaseBdev2 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.372 [ 00:09:21.372 { 00:09:21.372 "name": "BaseBdev2", 00:09:21.372 "aliases": [ 00:09:21.372 "9014eb9a-5413-427d-a5b3-4462d13355ac" 00:09:21.372 ], 00:09:21.372 "product_name": "Malloc disk", 00:09:21.372 "block_size": 512, 00:09:21.372 "num_blocks": 65536, 00:09:21.372 "uuid": "9014eb9a-5413-427d-a5b3-4462d13355ac", 00:09:21.372 "assigned_rate_limits": { 00:09:21.372 "rw_ios_per_sec": 0, 00:09:21.372 "rw_mbytes_per_sec": 0, 00:09:21.372 "r_mbytes_per_sec": 0, 00:09:21.372 "w_mbytes_per_sec": 0 00:09:21.372 }, 00:09:21.372 "claimed": true, 00:09:21.372 "claim_type": "exclusive_write", 00:09:21.372 "zoned": false, 00:09:21.372 "supported_io_types": { 00:09:21.372 "read": true, 00:09:21.372 "write": true, 00:09:21.372 "unmap": true, 00:09:21.372 "flush": true, 00:09:21.372 "reset": true, 00:09:21.372 "nvme_admin": false, 00:09:21.372 "nvme_io": false, 00:09:21.372 "nvme_io_md": false, 00:09:21.372 "write_zeroes": true, 00:09:21.372 "zcopy": true, 00:09:21.372 "get_zone_info": false, 00:09:21.372 "zone_management": false, 00:09:21.372 "zone_append": false, 00:09:21.372 "compare": false, 00:09:21.372 "compare_and_write": false, 00:09:21.372 "abort": true, 00:09:21.372 "seek_hole": false, 00:09:21.372 "seek_data": false, 00:09:21.372 "copy": true, 00:09:21.372 "nvme_iov_md": false 00:09:21.372 }, 00:09:21.372 "memory_domains": [ 00:09:21.372 { 00:09:21.372 "dma_device_id": "system", 00:09:21.372 "dma_device_type": 1 00:09:21.372 }, 00:09:21.372 { 00:09:21.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.372 "dma_device_type": 2 00:09:21.372 } 00:09:21.372 ], 00:09:21.372 "driver_specific": {} 00:09:21.372 } 00:09:21.372 ] 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.372 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.373 
18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.373 "name": "Existed_Raid", 00:09:21.373 "uuid": "1ed54159-76c0-475d-9fc6-29597c695968", 00:09:21.373 "strip_size_kb": 0, 00:09:21.373 "state": "configuring", 00:09:21.373 "raid_level": "raid1", 00:09:21.373 "superblock": true, 00:09:21.373 "num_base_bdevs": 3, 00:09:21.373 "num_base_bdevs_discovered": 2, 00:09:21.373 "num_base_bdevs_operational": 3, 00:09:21.373 "base_bdevs_list": [ 00:09:21.373 { 00:09:21.373 "name": "BaseBdev1", 00:09:21.373 "uuid": "9e740063-8d69-4540-969c-5ecd07c3f913", 00:09:21.373 "is_configured": true, 00:09:21.373 "data_offset": 2048, 00:09:21.373 "data_size": 63488 00:09:21.373 }, 00:09:21.373 { 00:09:21.373 "name": "BaseBdev2", 00:09:21.373 "uuid": "9014eb9a-5413-427d-a5b3-4462d13355ac", 00:09:21.373 "is_configured": true, 00:09:21.373 "data_offset": 2048, 00:09:21.373 "data_size": 63488 00:09:21.373 }, 00:09:21.373 { 00:09:21.373 "name": "BaseBdev3", 00:09:21.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.373 "is_configured": false, 00:09:21.373 "data_offset": 0, 00:09:21.373 "data_size": 0 00:09:21.373 } 00:09:21.373 ] 00:09:21.373 }' 00:09:21.373 18:50:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.373 18:50:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.956 [2024-11-16 18:50:05.272678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.956 [2024-11-16 18:50:05.272943] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:09:21.956 [2024-11-16 18:50:05.272967] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:21.956 BaseBdev3 00:09:21.956 [2024-11-16 18:50:05.273425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:21.956 [2024-11-16 18:50:05.273585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:21.956 [2024-11-16 18:50:05.273595] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:21.956 [2024-11-16 18:50:05.273754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.956 18:50:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.956 [ 00:09:21.956 { 00:09:21.956 "name": "BaseBdev3", 00:09:21.956 "aliases": [ 00:09:21.956 "9e69ba4a-5574-4f6c-9864-46a752e90663" 00:09:21.956 ], 00:09:21.956 "product_name": "Malloc disk", 00:09:21.956 "block_size": 512, 00:09:21.956 "num_blocks": 65536, 00:09:21.956 "uuid": "9e69ba4a-5574-4f6c-9864-46a752e90663", 00:09:21.956 "assigned_rate_limits": { 00:09:21.956 "rw_ios_per_sec": 0, 00:09:21.956 "rw_mbytes_per_sec": 0, 00:09:21.956 "r_mbytes_per_sec": 0, 00:09:21.956 "w_mbytes_per_sec": 0 00:09:21.956 }, 00:09:21.956 "claimed": true, 00:09:21.956 "claim_type": "exclusive_write", 00:09:21.956 "zoned": false, 00:09:21.956 "supported_io_types": { 00:09:21.956 "read": true, 00:09:21.956 "write": true, 00:09:21.956 "unmap": true, 00:09:21.956 "flush": true, 00:09:21.956 "reset": true, 00:09:21.956 "nvme_admin": false, 00:09:21.956 "nvme_io": false, 00:09:21.956 "nvme_io_md": false, 00:09:21.956 "write_zeroes": true, 00:09:21.956 "zcopy": true, 00:09:21.956 "get_zone_info": false, 00:09:21.956 "zone_management": false, 00:09:21.956 "zone_append": false, 00:09:21.956 "compare": false, 00:09:21.956 "compare_and_write": false, 00:09:21.956 "abort": true, 00:09:21.956 "seek_hole": false, 00:09:21.956 "seek_data": false, 00:09:21.956 "copy": true, 00:09:21.956 "nvme_iov_md": false 00:09:21.956 }, 00:09:21.956 "memory_domains": [ 00:09:21.956 { 00:09:21.956 "dma_device_id": "system", 00:09:21.956 "dma_device_type": 1 00:09:21.956 }, 00:09:21.956 { 00:09:21.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.956 "dma_device_type": 2 00:09:21.956 } 00:09:21.956 ], 00:09:21.956 "driver_specific": {} 00:09:21.956 } 00:09:21.956 ] 
00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.956 
18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.956 "name": "Existed_Raid", 00:09:21.956 "uuid": "1ed54159-76c0-475d-9fc6-29597c695968", 00:09:21.956 "strip_size_kb": 0, 00:09:21.956 "state": "online", 00:09:21.956 "raid_level": "raid1", 00:09:21.956 "superblock": true, 00:09:21.956 "num_base_bdevs": 3, 00:09:21.956 "num_base_bdevs_discovered": 3, 00:09:21.956 "num_base_bdevs_operational": 3, 00:09:21.956 "base_bdevs_list": [ 00:09:21.956 { 00:09:21.956 "name": "BaseBdev1", 00:09:21.956 "uuid": "9e740063-8d69-4540-969c-5ecd07c3f913", 00:09:21.956 "is_configured": true, 00:09:21.956 "data_offset": 2048, 00:09:21.956 "data_size": 63488 00:09:21.956 }, 00:09:21.956 { 00:09:21.956 "name": "BaseBdev2", 00:09:21.956 "uuid": "9014eb9a-5413-427d-a5b3-4462d13355ac", 00:09:21.956 "is_configured": true, 00:09:21.956 "data_offset": 2048, 00:09:21.956 "data_size": 63488 00:09:21.956 }, 00:09:21.956 { 00:09:21.956 "name": "BaseBdev3", 00:09:21.956 "uuid": "9e69ba4a-5574-4f6c-9864-46a752e90663", 00:09:21.956 "is_configured": true, 00:09:21.956 "data_offset": 2048, 00:09:21.956 "data_size": 63488 00:09:21.956 } 00:09:21.956 ] 00:09:21.956 }' 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.956 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.527 [2024-11-16 18:50:05.724242] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:22.527 "name": "Existed_Raid", 00:09:22.527 "aliases": [ 00:09:22.527 "1ed54159-76c0-475d-9fc6-29597c695968" 00:09:22.527 ], 00:09:22.527 "product_name": "Raid Volume", 00:09:22.527 "block_size": 512, 00:09:22.527 "num_blocks": 63488, 00:09:22.527 "uuid": "1ed54159-76c0-475d-9fc6-29597c695968", 00:09:22.527 "assigned_rate_limits": { 00:09:22.527 "rw_ios_per_sec": 0, 00:09:22.527 "rw_mbytes_per_sec": 0, 00:09:22.527 "r_mbytes_per_sec": 0, 00:09:22.527 "w_mbytes_per_sec": 0 00:09:22.527 }, 00:09:22.527 "claimed": false, 00:09:22.527 "zoned": false, 00:09:22.527 "supported_io_types": { 00:09:22.527 "read": true, 00:09:22.527 "write": true, 00:09:22.527 "unmap": false, 00:09:22.527 "flush": false, 00:09:22.527 "reset": true, 00:09:22.527 "nvme_admin": false, 00:09:22.527 "nvme_io": false, 00:09:22.527 "nvme_io_md": false, 00:09:22.527 "write_zeroes": true, 
00:09:22.527 "zcopy": false, 00:09:22.527 "get_zone_info": false, 00:09:22.527 "zone_management": false, 00:09:22.527 "zone_append": false, 00:09:22.527 "compare": false, 00:09:22.527 "compare_and_write": false, 00:09:22.527 "abort": false, 00:09:22.527 "seek_hole": false, 00:09:22.527 "seek_data": false, 00:09:22.527 "copy": false, 00:09:22.527 "nvme_iov_md": false 00:09:22.527 }, 00:09:22.527 "memory_domains": [ 00:09:22.527 { 00:09:22.527 "dma_device_id": "system", 00:09:22.527 "dma_device_type": 1 00:09:22.527 }, 00:09:22.527 { 00:09:22.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.527 "dma_device_type": 2 00:09:22.527 }, 00:09:22.527 { 00:09:22.527 "dma_device_id": "system", 00:09:22.527 "dma_device_type": 1 00:09:22.527 }, 00:09:22.527 { 00:09:22.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.527 "dma_device_type": 2 00:09:22.527 }, 00:09:22.527 { 00:09:22.527 "dma_device_id": "system", 00:09:22.527 "dma_device_type": 1 00:09:22.527 }, 00:09:22.527 { 00:09:22.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.527 "dma_device_type": 2 00:09:22.527 } 00:09:22.527 ], 00:09:22.527 "driver_specific": { 00:09:22.527 "raid": { 00:09:22.527 "uuid": "1ed54159-76c0-475d-9fc6-29597c695968", 00:09:22.527 "strip_size_kb": 0, 00:09:22.527 "state": "online", 00:09:22.527 "raid_level": "raid1", 00:09:22.527 "superblock": true, 00:09:22.527 "num_base_bdevs": 3, 00:09:22.527 "num_base_bdevs_discovered": 3, 00:09:22.527 "num_base_bdevs_operational": 3, 00:09:22.527 "base_bdevs_list": [ 00:09:22.527 { 00:09:22.527 "name": "BaseBdev1", 00:09:22.527 "uuid": "9e740063-8d69-4540-969c-5ecd07c3f913", 00:09:22.527 "is_configured": true, 00:09:22.527 "data_offset": 2048, 00:09:22.527 "data_size": 63488 00:09:22.527 }, 00:09:22.527 { 00:09:22.527 "name": "BaseBdev2", 00:09:22.527 "uuid": "9014eb9a-5413-427d-a5b3-4462d13355ac", 00:09:22.527 "is_configured": true, 00:09:22.527 "data_offset": 2048, 00:09:22.527 "data_size": 63488 00:09:22.527 }, 00:09:22.527 { 
00:09:22.527 "name": "BaseBdev3", 00:09:22.527 "uuid": "9e69ba4a-5574-4f6c-9864-46a752e90663", 00:09:22.527 "is_configured": true, 00:09:22.527 "data_offset": 2048, 00:09:22.527 "data_size": 63488 00:09:22.527 } 00:09:22.527 ] 00:09:22.527 } 00:09:22.527 } 00:09:22.527 }' 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:22.527 BaseBdev2 00:09:22.527 BaseBdev3' 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.527 18:50:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.527 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.528 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.528 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:22.528 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:22.528 18:50:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:22.528 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.528 18:50:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.788 [2024-11-16 18:50:06.015444] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.788 
18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.788 "name": "Existed_Raid", 00:09:22.788 "uuid": "1ed54159-76c0-475d-9fc6-29597c695968", 00:09:22.788 "strip_size_kb": 0, 00:09:22.788 "state": "online", 00:09:22.788 "raid_level": "raid1", 00:09:22.788 "superblock": true, 00:09:22.788 "num_base_bdevs": 3, 00:09:22.788 "num_base_bdevs_discovered": 2, 00:09:22.788 "num_base_bdevs_operational": 2, 00:09:22.788 "base_bdevs_list": [ 00:09:22.788 { 00:09:22.788 "name": null, 00:09:22.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.788 "is_configured": false, 00:09:22.788 "data_offset": 0, 00:09:22.788 "data_size": 63488 00:09:22.788 }, 00:09:22.788 { 00:09:22.788 "name": "BaseBdev2", 00:09:22.788 "uuid": "9014eb9a-5413-427d-a5b3-4462d13355ac", 00:09:22.788 "is_configured": true, 00:09:22.788 "data_offset": 2048, 00:09:22.788 "data_size": 63488 00:09:22.788 }, 00:09:22.788 { 00:09:22.788 "name": "BaseBdev3", 00:09:22.788 "uuid": "9e69ba4a-5574-4f6c-9864-46a752e90663", 00:09:22.788 "is_configured": true, 00:09:22.788 "data_offset": 2048, 00:09:22.788 "data_size": 63488 00:09:22.788 } 00:09:22.788 ] 00:09:22.788 }' 00:09:22.788 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.788 
18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.358 [2024-11-16 18:50:06.647607] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.358 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.358 [2024-11-16 18:50:06.799700] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.358 [2024-11-16 18:50:06.799800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:23.618 [2024-11-16 18:50:06.892221] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:23.618 [2024-11-16 18:50:06.892275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:23.618 [2024-11-16 18:50:06.892286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.618 BaseBdev2 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.618 18:50:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.618 [ 00:09:23.618 { 00:09:23.618 "name": "BaseBdev2", 00:09:23.618 "aliases": [ 00:09:23.618 "0c647931-8d1e-498a-acbd-77f6cb14d3a3" 00:09:23.618 ], 00:09:23.618 "product_name": "Malloc disk", 00:09:23.618 "block_size": 512, 00:09:23.618 "num_blocks": 65536, 00:09:23.618 "uuid": "0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:23.618 "assigned_rate_limits": { 00:09:23.618 "rw_ios_per_sec": 0, 00:09:23.618 "rw_mbytes_per_sec": 0, 00:09:23.618 "r_mbytes_per_sec": 0, 00:09:23.618 "w_mbytes_per_sec": 0 00:09:23.618 }, 00:09:23.618 "claimed": false, 00:09:23.618 "zoned": false, 00:09:23.618 "supported_io_types": { 00:09:23.618 "read": true, 00:09:23.618 "write": true, 00:09:23.618 "unmap": true, 00:09:23.618 "flush": true, 00:09:23.618 "reset": true, 00:09:23.618 "nvme_admin": false, 00:09:23.618 "nvme_io": false, 00:09:23.618 
"nvme_io_md": false, 00:09:23.618 "write_zeroes": true, 00:09:23.618 "zcopy": true, 00:09:23.618 "get_zone_info": false, 00:09:23.618 "zone_management": false, 00:09:23.618 "zone_append": false, 00:09:23.618 "compare": false, 00:09:23.618 "compare_and_write": false, 00:09:23.618 "abort": true, 00:09:23.618 "seek_hole": false, 00:09:23.618 "seek_data": false, 00:09:23.618 "copy": true, 00:09:23.618 "nvme_iov_md": false 00:09:23.618 }, 00:09:23.618 "memory_domains": [ 00:09:23.618 { 00:09:23.618 "dma_device_id": "system", 00:09:23.618 "dma_device_type": 1 00:09:23.618 }, 00:09:23.618 { 00:09:23.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.618 "dma_device_type": 2 00:09:23.618 } 00:09:23.618 ], 00:09:23.618 "driver_specific": {} 00:09:23.618 } 00:09:23.618 ] 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.618 BaseBdev3 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.618 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.879 [ 00:09:23.879 { 00:09:23.879 "name": "BaseBdev3", 00:09:23.879 "aliases": [ 00:09:23.879 "21601fdf-93d4-4891-8e5d-0d02f20cb47f" 00:09:23.879 ], 00:09:23.879 "product_name": "Malloc disk", 00:09:23.879 "block_size": 512, 00:09:23.879 "num_blocks": 65536, 00:09:23.879 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:23.879 "assigned_rate_limits": { 00:09:23.879 "rw_ios_per_sec": 0, 00:09:23.879 "rw_mbytes_per_sec": 0, 00:09:23.879 "r_mbytes_per_sec": 0, 00:09:23.879 "w_mbytes_per_sec": 0 00:09:23.879 }, 00:09:23.879 "claimed": false, 00:09:23.879 "zoned": false, 00:09:23.879 "supported_io_types": { 00:09:23.879 "read": true, 00:09:23.879 "write": true, 00:09:23.879 "unmap": true, 00:09:23.879 "flush": true, 00:09:23.879 "reset": true, 00:09:23.879 "nvme_admin": false, 
00:09:23.879 "nvme_io": false, 00:09:23.879 "nvme_io_md": false, 00:09:23.879 "write_zeroes": true, 00:09:23.879 "zcopy": true, 00:09:23.879 "get_zone_info": false, 00:09:23.879 "zone_management": false, 00:09:23.879 "zone_append": false, 00:09:23.879 "compare": false, 00:09:23.879 "compare_and_write": false, 00:09:23.879 "abort": true, 00:09:23.879 "seek_hole": false, 00:09:23.879 "seek_data": false, 00:09:23.879 "copy": true, 00:09:23.879 "nvme_iov_md": false 00:09:23.879 }, 00:09:23.879 "memory_domains": [ 00:09:23.879 { 00:09:23.879 "dma_device_id": "system", 00:09:23.879 "dma_device_type": 1 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.879 "dma_device_type": 2 00:09:23.879 } 00:09:23.879 ], 00:09:23.879 "driver_specific": {} 00:09:23.879 } 00:09:23.879 ] 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.879 [2024-11-16 18:50:07.111043] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:23.879 [2024-11-16 18:50:07.111148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:23.879 [2024-11-16 18:50:07.111189] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.879 [2024-11-16 18:50:07.112970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.879 
18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.879 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.879 "name": "Existed_Raid", 00:09:23.879 "uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:23.879 "strip_size_kb": 0, 00:09:23.879 "state": "configuring", 00:09:23.879 "raid_level": "raid1", 00:09:23.879 "superblock": true, 00:09:23.879 "num_base_bdevs": 3, 00:09:23.879 "num_base_bdevs_discovered": 2, 00:09:23.879 "num_base_bdevs_operational": 3, 00:09:23.879 "base_bdevs_list": [ 00:09:23.879 { 00:09:23.879 "name": "BaseBdev1", 00:09:23.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.879 "is_configured": false, 00:09:23.879 "data_offset": 0, 00:09:23.879 "data_size": 0 00:09:23.879 }, 00:09:23.879 { 00:09:23.879 "name": "BaseBdev2", 00:09:23.879 "uuid": "0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:23.879 "is_configured": true, 00:09:23.879 "data_offset": 2048, 00:09:23.879 "data_size": 63488 00:09:23.879 }, 00:09:23.879 { 00:09:23.880 "name": "BaseBdev3", 00:09:23.880 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:23.880 "is_configured": true, 00:09:23.880 "data_offset": 2048, 00:09:23.880 "data_size": 63488 00:09:23.880 } 00:09:23.880 ] 00:09:23.880 }' 00:09:23.880 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.880 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 [2024-11-16 18:50:07.510384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:24.140 18:50:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.140 "name": 
"Existed_Raid", 00:09:24.140 "uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:24.140 "strip_size_kb": 0, 00:09:24.140 "state": "configuring", 00:09:24.140 "raid_level": "raid1", 00:09:24.140 "superblock": true, 00:09:24.140 "num_base_bdevs": 3, 00:09:24.140 "num_base_bdevs_discovered": 1, 00:09:24.140 "num_base_bdevs_operational": 3, 00:09:24.140 "base_bdevs_list": [ 00:09:24.140 { 00:09:24.140 "name": "BaseBdev1", 00:09:24.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.140 "is_configured": false, 00:09:24.140 "data_offset": 0, 00:09:24.140 "data_size": 0 00:09:24.140 }, 00:09:24.140 { 00:09:24.140 "name": null, 00:09:24.140 "uuid": "0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:24.140 "is_configured": false, 00:09:24.140 "data_offset": 0, 00:09:24.140 "data_size": 63488 00:09:24.140 }, 00:09:24.140 { 00:09:24.140 "name": "BaseBdev3", 00:09:24.140 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:24.140 "is_configured": true, 00:09:24.140 "data_offset": 2048, 00:09:24.140 "data_size": 63488 00:09:24.140 } 00:09:24.140 ] 00:09:24.140 }' 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.140 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.706 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.706 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.706 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:24.706 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.706 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.706 18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:24.706 
18:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:24.706 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.706 18:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.706 [2024-11-16 18:50:08.005095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.706 BaseBdev1 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.706 [ 00:09:24.706 { 00:09:24.706 "name": "BaseBdev1", 00:09:24.706 "aliases": [ 00:09:24.706 "65f89a15-33be-44d4-bb0c-69e72c13d95d" 00:09:24.706 ], 00:09:24.706 "product_name": "Malloc disk", 00:09:24.706 "block_size": 512, 00:09:24.706 "num_blocks": 65536, 00:09:24.706 "uuid": "65f89a15-33be-44d4-bb0c-69e72c13d95d", 00:09:24.706 "assigned_rate_limits": { 00:09:24.706 "rw_ios_per_sec": 0, 00:09:24.706 "rw_mbytes_per_sec": 0, 00:09:24.706 "r_mbytes_per_sec": 0, 00:09:24.706 "w_mbytes_per_sec": 0 00:09:24.706 }, 00:09:24.706 "claimed": true, 00:09:24.706 "claim_type": "exclusive_write", 00:09:24.706 "zoned": false, 00:09:24.706 "supported_io_types": { 00:09:24.706 "read": true, 00:09:24.706 "write": true, 00:09:24.706 "unmap": true, 00:09:24.706 "flush": true, 00:09:24.706 "reset": true, 00:09:24.706 "nvme_admin": false, 00:09:24.706 "nvme_io": false, 00:09:24.706 "nvme_io_md": false, 00:09:24.706 "write_zeroes": true, 00:09:24.706 "zcopy": true, 00:09:24.706 "get_zone_info": false, 00:09:24.706 "zone_management": false, 00:09:24.706 "zone_append": false, 00:09:24.706 "compare": false, 00:09:24.706 "compare_and_write": false, 00:09:24.706 "abort": true, 00:09:24.706 "seek_hole": false, 00:09:24.706 "seek_data": false, 00:09:24.706 "copy": true, 00:09:24.706 "nvme_iov_md": false 00:09:24.706 }, 00:09:24.706 "memory_domains": [ 00:09:24.706 { 00:09:24.706 "dma_device_id": "system", 00:09:24.706 "dma_device_type": 1 00:09:24.706 }, 00:09:24.706 { 00:09:24.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.706 "dma_device_type": 2 00:09:24.706 } 00:09:24.706 ], 00:09:24.706 "driver_specific": {} 00:09:24.706 } 00:09:24.706 ] 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:24.706 
18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.706 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.707 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.707 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.707 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.707 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.707 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.707 "name": "Existed_Raid", 00:09:24.707 "uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:24.707 "strip_size_kb": 0, 
00:09:24.707 "state": "configuring", 00:09:24.707 "raid_level": "raid1", 00:09:24.707 "superblock": true, 00:09:24.707 "num_base_bdevs": 3, 00:09:24.707 "num_base_bdevs_discovered": 2, 00:09:24.707 "num_base_bdevs_operational": 3, 00:09:24.707 "base_bdevs_list": [ 00:09:24.707 { 00:09:24.707 "name": "BaseBdev1", 00:09:24.707 "uuid": "65f89a15-33be-44d4-bb0c-69e72c13d95d", 00:09:24.707 "is_configured": true, 00:09:24.707 "data_offset": 2048, 00:09:24.707 "data_size": 63488 00:09:24.707 }, 00:09:24.707 { 00:09:24.707 "name": null, 00:09:24.707 "uuid": "0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:24.707 "is_configured": false, 00:09:24.707 "data_offset": 0, 00:09:24.707 "data_size": 63488 00:09:24.707 }, 00:09:24.707 { 00:09:24.707 "name": "BaseBdev3", 00:09:24.707 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:24.707 "is_configured": true, 00:09:24.707 "data_offset": 2048, 00:09:24.707 "data_size": 63488 00:09:24.707 } 00:09:24.707 ] 00:09:24.707 }' 00:09:24.707 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.707 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.275 [2024-11-16 18:50:08.544232] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.275 "name": "Existed_Raid", 00:09:25.275 "uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:25.275 "strip_size_kb": 0, 00:09:25.275 "state": "configuring", 00:09:25.275 "raid_level": "raid1", 00:09:25.275 "superblock": true, 00:09:25.275 "num_base_bdevs": 3, 00:09:25.275 "num_base_bdevs_discovered": 1, 00:09:25.275 "num_base_bdevs_operational": 3, 00:09:25.275 "base_bdevs_list": [ 00:09:25.275 { 00:09:25.275 "name": "BaseBdev1", 00:09:25.275 "uuid": "65f89a15-33be-44d4-bb0c-69e72c13d95d", 00:09:25.275 "is_configured": true, 00:09:25.275 "data_offset": 2048, 00:09:25.275 "data_size": 63488 00:09:25.275 }, 00:09:25.275 { 00:09:25.275 "name": null, 00:09:25.275 "uuid": "0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:25.275 "is_configured": false, 00:09:25.275 "data_offset": 0, 00:09:25.275 "data_size": 63488 00:09:25.275 }, 00:09:25.275 { 00:09:25.275 "name": null, 00:09:25.275 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:25.275 "is_configured": false, 00:09:25.275 "data_offset": 0, 00:09:25.275 "data_size": 63488 00:09:25.275 } 00:09:25.275 ] 00:09:25.275 }' 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.275 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.533 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.533 18:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:25.533 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:25.533 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.533 18:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.792 [2024-11-16 18:50:09.027446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.792 "name": "Existed_Raid", 00:09:25.792 "uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:25.792 "strip_size_kb": 0, 00:09:25.792 "state": "configuring", 00:09:25.792 "raid_level": "raid1", 00:09:25.792 "superblock": true, 00:09:25.792 "num_base_bdevs": 3, 00:09:25.792 "num_base_bdevs_discovered": 2, 00:09:25.792 "num_base_bdevs_operational": 3, 00:09:25.792 "base_bdevs_list": [ 00:09:25.792 { 00:09:25.792 "name": "BaseBdev1", 00:09:25.792 "uuid": "65f89a15-33be-44d4-bb0c-69e72c13d95d", 00:09:25.792 "is_configured": true, 00:09:25.792 "data_offset": 2048, 00:09:25.792 "data_size": 63488 00:09:25.792 }, 00:09:25.792 { 00:09:25.792 "name": null, 00:09:25.792 "uuid": "0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:25.792 "is_configured": false, 00:09:25.792 "data_offset": 0, 00:09:25.792 "data_size": 63488 00:09:25.792 }, 00:09:25.792 { 00:09:25.792 "name": "BaseBdev3", 00:09:25.792 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:25.792 "is_configured": true, 00:09:25.792 "data_offset": 2048, 00:09:25.792 "data_size": 63488 00:09:25.792 } 00:09:25.792 ] 00:09:25.792 }' 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.792 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.052 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.052 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.052 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:26.052 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.052 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.052 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:26.052 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:26.052 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.052 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.052 [2024-11-16 18:50:09.442740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.313 "name": "Existed_Raid", 00:09:26.313 "uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:26.313 "strip_size_kb": 0, 00:09:26.313 "state": "configuring", 00:09:26.313 "raid_level": "raid1", 00:09:26.313 "superblock": true, 00:09:26.313 "num_base_bdevs": 3, 00:09:26.313 "num_base_bdevs_discovered": 1, 00:09:26.313 "num_base_bdevs_operational": 3, 00:09:26.313 "base_bdevs_list": [ 00:09:26.313 { 00:09:26.313 "name": null, 00:09:26.313 "uuid": "65f89a15-33be-44d4-bb0c-69e72c13d95d", 00:09:26.313 "is_configured": false, 00:09:26.313 "data_offset": 0, 00:09:26.313 "data_size": 63488 00:09:26.313 }, 00:09:26.313 { 00:09:26.313 "name": null, 00:09:26.313 "uuid": 
"0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:26.313 "is_configured": false, 00:09:26.313 "data_offset": 0, 00:09:26.313 "data_size": 63488 00:09:26.313 }, 00:09:26.313 { 00:09:26.313 "name": "BaseBdev3", 00:09:26.313 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:26.313 "is_configured": true, 00:09:26.313 "data_offset": 2048, 00:09:26.313 "data_size": 63488 00:09:26.313 } 00:09:26.313 ] 00:09:26.313 }' 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.313 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.576 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.576 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.576 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.576 18:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:26.576 18:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.576 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:26.576 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:26.576 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.576 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.576 [2024-11-16 18:50:10.031733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:26.576 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.576 18:50:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:26.576 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.576 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.577 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.835 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.835 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.835 "name": "Existed_Raid", 00:09:26.835 "uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:26.835 "strip_size_kb": 0, 00:09:26.835 "state": "configuring", 00:09:26.835 
"raid_level": "raid1", 00:09:26.835 "superblock": true, 00:09:26.835 "num_base_bdevs": 3, 00:09:26.835 "num_base_bdevs_discovered": 2, 00:09:26.835 "num_base_bdevs_operational": 3, 00:09:26.835 "base_bdevs_list": [ 00:09:26.835 { 00:09:26.835 "name": null, 00:09:26.835 "uuid": "65f89a15-33be-44d4-bb0c-69e72c13d95d", 00:09:26.835 "is_configured": false, 00:09:26.835 "data_offset": 0, 00:09:26.835 "data_size": 63488 00:09:26.835 }, 00:09:26.835 { 00:09:26.835 "name": "BaseBdev2", 00:09:26.835 "uuid": "0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:26.835 "is_configured": true, 00:09:26.835 "data_offset": 2048, 00:09:26.835 "data_size": 63488 00:09:26.835 }, 00:09:26.835 { 00:09:26.835 "name": "BaseBdev3", 00:09:26.835 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:26.835 "is_configured": true, 00:09:26.835 "data_offset": 2048, 00:09:26.835 "data_size": 63488 00:09:26.835 } 00:09:26.835 ] 00:09:26.835 }' 00:09:26.835 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.835 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.094 18:50:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:27.094 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 65f89a15-33be-44d4-bb0c-69e72c13d95d 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 [2024-11-16 18:50:10.602169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:27.353 [2024-11-16 18:50:10.602384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:27.353 [2024-11-16 18:50:10.602397] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:27.353 [2024-11-16 18:50:10.602677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:27.353 [2024-11-16 18:50:10.602836] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:27.353 [2024-11-16 18:50:10.602852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:27.353 NewBaseBdev 00:09:27.353 [2024-11-16 18:50:10.602979] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:27.353 
18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 [ 00:09:27.353 { 00:09:27.353 "name": "NewBaseBdev", 00:09:27.353 "aliases": [ 00:09:27.353 "65f89a15-33be-44d4-bb0c-69e72c13d95d" 00:09:27.353 ], 00:09:27.353 "product_name": "Malloc disk", 00:09:27.353 "block_size": 512, 00:09:27.353 "num_blocks": 65536, 00:09:27.353 "uuid": "65f89a15-33be-44d4-bb0c-69e72c13d95d", 00:09:27.353 "assigned_rate_limits": { 00:09:27.353 "rw_ios_per_sec": 0, 00:09:27.353 "rw_mbytes_per_sec": 0, 00:09:27.353 "r_mbytes_per_sec": 0, 00:09:27.353 "w_mbytes_per_sec": 0 00:09:27.353 }, 00:09:27.353 "claimed": true, 00:09:27.353 "claim_type": "exclusive_write", 00:09:27.353 
"zoned": false, 00:09:27.353 "supported_io_types": { 00:09:27.353 "read": true, 00:09:27.353 "write": true, 00:09:27.353 "unmap": true, 00:09:27.353 "flush": true, 00:09:27.353 "reset": true, 00:09:27.353 "nvme_admin": false, 00:09:27.353 "nvme_io": false, 00:09:27.353 "nvme_io_md": false, 00:09:27.353 "write_zeroes": true, 00:09:27.353 "zcopy": true, 00:09:27.353 "get_zone_info": false, 00:09:27.353 "zone_management": false, 00:09:27.353 "zone_append": false, 00:09:27.353 "compare": false, 00:09:27.353 "compare_and_write": false, 00:09:27.353 "abort": true, 00:09:27.353 "seek_hole": false, 00:09:27.353 "seek_data": false, 00:09:27.353 "copy": true, 00:09:27.353 "nvme_iov_md": false 00:09:27.353 }, 00:09:27.353 "memory_domains": [ 00:09:27.353 { 00:09:27.353 "dma_device_id": "system", 00:09:27.353 "dma_device_type": 1 00:09:27.353 }, 00:09:27.353 { 00:09:27.353 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.353 "dma_device_type": 2 00:09:27.353 } 00:09:27.353 ], 00:09:27.353 "driver_specific": {} 00:09:27.353 } 00:09:27.353 ] 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.353 "name": "Existed_Raid", 00:09:27.353 "uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:27.353 "strip_size_kb": 0, 00:09:27.353 "state": "online", 00:09:27.353 "raid_level": "raid1", 00:09:27.353 "superblock": true, 00:09:27.353 "num_base_bdevs": 3, 00:09:27.353 "num_base_bdevs_discovered": 3, 00:09:27.353 "num_base_bdevs_operational": 3, 00:09:27.353 "base_bdevs_list": [ 00:09:27.353 { 00:09:27.353 "name": "NewBaseBdev", 00:09:27.353 "uuid": "65f89a15-33be-44d4-bb0c-69e72c13d95d", 00:09:27.353 "is_configured": true, 00:09:27.353 "data_offset": 2048, 00:09:27.353 "data_size": 63488 00:09:27.353 }, 00:09:27.353 { 00:09:27.353 "name": "BaseBdev2", 00:09:27.353 "uuid": "0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:27.353 "is_configured": true, 00:09:27.353 "data_offset": 2048, 00:09:27.353 "data_size": 63488 00:09:27.353 }, 00:09:27.353 
{ 00:09:27.353 "name": "BaseBdev3", 00:09:27.353 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:27.353 "is_configured": true, 00:09:27.353 "data_offset": 2048, 00:09:27.353 "data_size": 63488 00:09:27.353 } 00:09:27.353 ] 00:09:27.353 }' 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.353 18:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.613 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.613 [2024-11-16 18:50:11.065733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.871 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.871 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:27.871 "name": "Existed_Raid", 00:09:27.871 
"aliases": [ 00:09:27.871 "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05" 00:09:27.871 ], 00:09:27.871 "product_name": "Raid Volume", 00:09:27.871 "block_size": 512, 00:09:27.871 "num_blocks": 63488, 00:09:27.871 "uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:27.871 "assigned_rate_limits": { 00:09:27.871 "rw_ios_per_sec": 0, 00:09:27.871 "rw_mbytes_per_sec": 0, 00:09:27.871 "r_mbytes_per_sec": 0, 00:09:27.871 "w_mbytes_per_sec": 0 00:09:27.871 }, 00:09:27.871 "claimed": false, 00:09:27.871 "zoned": false, 00:09:27.871 "supported_io_types": { 00:09:27.871 "read": true, 00:09:27.871 "write": true, 00:09:27.871 "unmap": false, 00:09:27.871 "flush": false, 00:09:27.871 "reset": true, 00:09:27.871 "nvme_admin": false, 00:09:27.871 "nvme_io": false, 00:09:27.871 "nvme_io_md": false, 00:09:27.871 "write_zeroes": true, 00:09:27.871 "zcopy": false, 00:09:27.871 "get_zone_info": false, 00:09:27.871 "zone_management": false, 00:09:27.871 "zone_append": false, 00:09:27.871 "compare": false, 00:09:27.871 "compare_and_write": false, 00:09:27.871 "abort": false, 00:09:27.871 "seek_hole": false, 00:09:27.871 "seek_data": false, 00:09:27.871 "copy": false, 00:09:27.871 "nvme_iov_md": false 00:09:27.871 }, 00:09:27.871 "memory_domains": [ 00:09:27.871 { 00:09:27.871 "dma_device_id": "system", 00:09:27.871 "dma_device_type": 1 00:09:27.871 }, 00:09:27.871 { 00:09:27.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.871 "dma_device_type": 2 00:09:27.871 }, 00:09:27.871 { 00:09:27.871 "dma_device_id": "system", 00:09:27.871 "dma_device_type": 1 00:09:27.871 }, 00:09:27.871 { 00:09:27.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.871 "dma_device_type": 2 00:09:27.871 }, 00:09:27.871 { 00:09:27.871 "dma_device_id": "system", 00:09:27.871 "dma_device_type": 1 00:09:27.871 }, 00:09:27.871 { 00:09:27.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.871 "dma_device_type": 2 00:09:27.872 } 00:09:27.872 ], 00:09:27.872 "driver_specific": { 00:09:27.872 "raid": { 00:09:27.872 
"uuid": "7dacabf9-5a8f-43cb-acd5-f68b9a37ae05", 00:09:27.872 "strip_size_kb": 0, 00:09:27.872 "state": "online", 00:09:27.872 "raid_level": "raid1", 00:09:27.872 "superblock": true, 00:09:27.872 "num_base_bdevs": 3, 00:09:27.872 "num_base_bdevs_discovered": 3, 00:09:27.872 "num_base_bdevs_operational": 3, 00:09:27.872 "base_bdevs_list": [ 00:09:27.872 { 00:09:27.872 "name": "NewBaseBdev", 00:09:27.872 "uuid": "65f89a15-33be-44d4-bb0c-69e72c13d95d", 00:09:27.872 "is_configured": true, 00:09:27.872 "data_offset": 2048, 00:09:27.872 "data_size": 63488 00:09:27.872 }, 00:09:27.872 { 00:09:27.872 "name": "BaseBdev2", 00:09:27.872 "uuid": "0c647931-8d1e-498a-acbd-77f6cb14d3a3", 00:09:27.872 "is_configured": true, 00:09:27.872 "data_offset": 2048, 00:09:27.872 "data_size": 63488 00:09:27.872 }, 00:09:27.872 { 00:09:27.872 "name": "BaseBdev3", 00:09:27.872 "uuid": "21601fdf-93d4-4891-8e5d-0d02f20cb47f", 00:09:27.872 "is_configured": true, 00:09:27.872 "data_offset": 2048, 00:09:27.872 "data_size": 63488 00:09:27.872 } 00:09:27.872 ] 00:09:27.872 } 00:09:27.872 } 00:09:27.872 }' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:27.872 BaseBdev2 00:09:27.872 BaseBdev3' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:27.872 18:50:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:27.872 18:50:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.872 [2024-11-16 18:50:11.304966] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.872 [2024-11-16 18:50:11.305018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:27.872 [2024-11-16 18:50:11.305082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:27.872 [2024-11-16 18:50:11.305355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:27.872 [2024-11-16 18:50:11.305376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67826 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 67826 ']' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67826 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.872 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67826 00:09:28.130 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.130 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.130 killing process with pid 67826 00:09:28.130 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67826' 00:09:28.130 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67826 00:09:28.130 [2024-11-16 18:50:11.346335] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:28.130 18:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67826 00:09:28.388 [2024-11-16 18:50:11.639159] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.327 18:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:29.327 00:09:29.327 real 0m10.246s 00:09:29.327 user 0m16.221s 00:09:29.327 sys 0m1.873s 00:09:29.327 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.327 18:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.327 ************************************ 00:09:29.327 END TEST raid_state_function_test_sb 00:09:29.327 ************************************ 00:09:29.327 18:50:12 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:09:29.327 18:50:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:29.327 18:50:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.327 18:50:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:29.327 ************************************ 00:09:29.327 START TEST raid_superblock_test 00:09:29.327 ************************************ 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68446 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68446 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68446 ']' 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:29.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:29.327 18:50:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.587 [2024-11-16 18:50:12.841285] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
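The `jq` filters traced above (bdev_raid.sh@188 and @189/@192) drive the property checks in this test. A minimal Python equivalent, using a trimmed stand-in for the `bdev_get_bdevs` JSON shape shown in the log dump, illustrates what the two filters extract and why the comparison string carries trailing spaces:

```python
import json

# Trimmed stand-in for the `rpc_cmd bdev_get_bdevs -b <raid bdev>` output
# dumped earlier in this log (only the fields the filters touch).
raid_bdev_info = json.loads("""
{
  "block_size": 512,
  "driver_specific": {
    "raid": {
      "base_bdevs_list": [
        {"name": "pt1", "is_configured": true},
        {"name": "pt2", "is_configured": true},
        {"name": "pt3", "is_configured": false}
      ]
    }
  }
}
""")

# jq: '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
base_bdev_names = [b["name"]
                   for b in raid_bdev_info["driver_specific"]["raid"]["base_bdevs_list"]
                   if b["is_configured"]]

# jq: '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
# Absent keys join as empty strings, which is why the log shows
# cmp_raid_bdev='512 ' (i.e. "512" followed by three spaces) and why the
# [[ 512 == \5\1\2\ \ \ ]] comparison at bdev_raid.sh@193 matches.
cmp_raid_bdev = " ".join(str(raid_bdev_info.get(k, ""))
                         for k in ("block_size", "md_size", "md_interleave", "dif_type"))

print(base_bdev_names)
print(repr(cmp_raid_bdev))
```

This is a sketch of the filter semantics only; the actual test loops over each name in `base_bdev_names` and re-runs the second filter against every base bdev, asserting each result equals `cmp_raid_bdev`.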
00:09:29.587 [2024-11-16 18:50:12.841419] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68446 ] 00:09:29.587 [2024-11-16 18:50:12.993200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:29.846 [2024-11-16 18:50:13.098856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.846 [2024-11-16 18:50:13.281067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:29.846 [2024-11-16 18:50:13.281102] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:30.417 
18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.417 malloc1 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.417 [2024-11-16 18:50:13.702152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:30.417 [2024-11-16 18:50:13.702231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.417 [2024-11-16 18:50:13.702255] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:30.417 [2024-11-16 18:50:13.702264] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.417 [2024-11-16 18:50:13.704297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.417 [2024-11-16 18:50:13.704334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:30.417 pt1 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.417 malloc2 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.417 [2024-11-16 18:50:13.756210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:30.417 [2024-11-16 18:50:13.756266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.417 [2024-11-16 18:50:13.756288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:30.417 [2024-11-16 18:50:13.756297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.417 [2024-11-16 18:50:13.758392] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.417 [2024-11-16 18:50:13.758426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:30.417 
pt2 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.417 malloc3 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.417 [2024-11-16 18:50:13.832393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:30.417 [2024-11-16 18:50:13.832443] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.417 [2024-11-16 18:50:13.832479] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:30.417 [2024-11-16 18:50:13.832487] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.417 [2024-11-16 18:50:13.834460] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.417 [2024-11-16 18:50:13.834493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:30.417 pt3 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.417 [2024-11-16 18:50:13.844420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:30.417 [2024-11-16 18:50:13.846191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.417 [2024-11-16 18:50:13.846257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:30.417 [2024-11-16 18:50:13.846420] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:30.417 [2024-11-16 18:50:13.846445] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:30.417 [2024-11-16 18:50:13.846684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:30.417 
[2024-11-16 18:50:13.846855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:30.417 [2024-11-16 18:50:13.846872] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:30.417 [2024-11-16 18:50:13.847018] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:30.417 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.677 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.677 "name": "raid_bdev1", 00:09:30.677 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:30.677 "strip_size_kb": 0, 00:09:30.677 "state": "online", 00:09:30.677 "raid_level": "raid1", 00:09:30.677 "superblock": true, 00:09:30.677 "num_base_bdevs": 3, 00:09:30.677 "num_base_bdevs_discovered": 3, 00:09:30.677 "num_base_bdevs_operational": 3, 00:09:30.677 "base_bdevs_list": [ 00:09:30.677 { 00:09:30.677 "name": "pt1", 00:09:30.677 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.677 "is_configured": true, 00:09:30.677 "data_offset": 2048, 00:09:30.677 "data_size": 63488 00:09:30.677 }, 00:09:30.677 { 00:09:30.677 "name": "pt2", 00:09:30.677 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.677 "is_configured": true, 00:09:30.677 "data_offset": 2048, 00:09:30.677 "data_size": 63488 00:09:30.677 }, 00:09:30.677 { 00:09:30.677 "name": "pt3", 00:09:30.677 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.677 "is_configured": true, 00:09:30.677 "data_offset": 2048, 00:09:30.677 "data_size": 63488 00:09:30.677 } 00:09:30.677 ] 00:09:30.677 }' 00:09:30.677 18:50:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.677 18:50:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:30.937 18:50:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.937 [2024-11-16 18:50:14.212109] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.937 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:30.937 "name": "raid_bdev1", 00:09:30.937 "aliases": [ 00:09:30.937 "92cfb832-0b98-47d0-893b-b92ba82347a0" 00:09:30.937 ], 00:09:30.937 "product_name": "Raid Volume", 00:09:30.937 "block_size": 512, 00:09:30.937 "num_blocks": 63488, 00:09:30.938 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:30.938 "assigned_rate_limits": { 00:09:30.938 "rw_ios_per_sec": 0, 00:09:30.938 "rw_mbytes_per_sec": 0, 00:09:30.938 "r_mbytes_per_sec": 0, 00:09:30.938 "w_mbytes_per_sec": 0 00:09:30.938 }, 00:09:30.938 "claimed": false, 00:09:30.938 "zoned": false, 00:09:30.938 "supported_io_types": { 00:09:30.938 "read": true, 00:09:30.938 "write": true, 00:09:30.938 "unmap": false, 00:09:30.938 "flush": false, 00:09:30.938 "reset": true, 00:09:30.938 "nvme_admin": false, 00:09:30.938 "nvme_io": false, 00:09:30.938 "nvme_io_md": false, 00:09:30.938 "write_zeroes": true, 00:09:30.938 "zcopy": false, 00:09:30.938 "get_zone_info": false, 00:09:30.938 "zone_management": false, 00:09:30.938 "zone_append": false, 00:09:30.938 "compare": false, 00:09:30.938 
"compare_and_write": false, 00:09:30.938 "abort": false, 00:09:30.938 "seek_hole": false, 00:09:30.938 "seek_data": false, 00:09:30.938 "copy": false, 00:09:30.938 "nvme_iov_md": false 00:09:30.938 }, 00:09:30.938 "memory_domains": [ 00:09:30.938 { 00:09:30.938 "dma_device_id": "system", 00:09:30.938 "dma_device_type": 1 00:09:30.938 }, 00:09:30.938 { 00:09:30.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.938 "dma_device_type": 2 00:09:30.938 }, 00:09:30.938 { 00:09:30.938 "dma_device_id": "system", 00:09:30.938 "dma_device_type": 1 00:09:30.938 }, 00:09:30.938 { 00:09:30.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.938 "dma_device_type": 2 00:09:30.938 }, 00:09:30.938 { 00:09:30.938 "dma_device_id": "system", 00:09:30.938 "dma_device_type": 1 00:09:30.938 }, 00:09:30.938 { 00:09:30.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:30.938 "dma_device_type": 2 00:09:30.938 } 00:09:30.938 ], 00:09:30.938 "driver_specific": { 00:09:30.938 "raid": { 00:09:30.938 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:30.938 "strip_size_kb": 0, 00:09:30.938 "state": "online", 00:09:30.938 "raid_level": "raid1", 00:09:30.938 "superblock": true, 00:09:30.938 "num_base_bdevs": 3, 00:09:30.938 "num_base_bdevs_discovered": 3, 00:09:30.938 "num_base_bdevs_operational": 3, 00:09:30.938 "base_bdevs_list": [ 00:09:30.938 { 00:09:30.938 "name": "pt1", 00:09:30.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.938 "is_configured": true, 00:09:30.938 "data_offset": 2048, 00:09:30.938 "data_size": 63488 00:09:30.938 }, 00:09:30.938 { 00:09:30.938 "name": "pt2", 00:09:30.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.938 "is_configured": true, 00:09:30.938 "data_offset": 2048, 00:09:30.938 "data_size": 63488 00:09:30.938 }, 00:09:30.938 { 00:09:30.938 "name": "pt3", 00:09:30.938 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.938 "is_configured": true, 00:09:30.938 "data_offset": 2048, 00:09:30.938 "data_size": 63488 00:09:30.938 } 
00:09:30.938 ] 00:09:30.938 } 00:09:30.938 } 00:09:30.938 }' 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:30.938 pt2 00:09:30.938 pt3' 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.938 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.938 18:50:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.199 [2024-11-16 18:50:14.491532] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=92cfb832-0b98-47d0-893b-b92ba82347a0 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 92cfb832-0b98-47d0-893b-b92ba82347a0 ']' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.199 [2024-11-16 18:50:14.523229] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.199 [2024-11-16 18:50:14.523258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:31.199 [2024-11-16 18:50:14.523323] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.199 [2024-11-16 18:50:14.523407] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.199 [2024-11-16 18:50:14.523421] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:31.199 
18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:31.199 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:31.200 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.200 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.200 [2024-11-16 18:50:14.667040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:31.200 [2024-11-16 18:50:14.668808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:31.200 [2024-11-16 18:50:14.668859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc3 is claimed 00:09:31.200 [2024-11-16 18:50:14.668905] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:31.200 [2024-11-16 18:50:14.668949] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:31.200 [2024-11-16 18:50:14.668966] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:31.200 [2024-11-16 18:50:14.668981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:31.200 [2024-11-16 18:50:14.668990] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:31.459 request: 00:09:31.459 { 00:09:31.459 "name": "raid_bdev1", 00:09:31.459 "raid_level": "raid1", 00:09:31.459 "base_bdevs": [ 00:09:31.459 "malloc1", 00:09:31.459 "malloc2", 00:09:31.459 "malloc3" 00:09:31.459 ], 00:09:31.459 "superblock": false, 00:09:31.459 "method": "bdev_raid_create", 00:09:31.459 "req_id": 1 00:09:31.459 } 00:09:31.459 Got JSON-RPC error response 00:09:31.459 response: 00:09:31.459 { 00:09:31.459 "code": -17, 00:09:31.459 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:31.459 } 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:31.460 18:50:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.460 [2024-11-16 18:50:14.722907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:31.460 [2024-11-16 18:50:14.722982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.460 [2024-11-16 18:50:14.723004] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:31.460 [2024-11-16 18:50:14.723012] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.460 [2024-11-16 18:50:14.725081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.460 [2024-11-16 18:50:14.725117] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:31.460 [2024-11-16 18:50:14.725199] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:31.460 [2024-11-16 18:50:14.725243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:31.460 pt1 00:09:31.460 18:50:14 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.460 "name": "raid_bdev1", 00:09:31.460 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:31.460 "strip_size_kb": 0, 00:09:31.460 "state": 
"configuring", 00:09:31.460 "raid_level": "raid1", 00:09:31.460 "superblock": true, 00:09:31.460 "num_base_bdevs": 3, 00:09:31.460 "num_base_bdevs_discovered": 1, 00:09:31.460 "num_base_bdevs_operational": 3, 00:09:31.460 "base_bdevs_list": [ 00:09:31.460 { 00:09:31.460 "name": "pt1", 00:09:31.460 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.460 "is_configured": true, 00:09:31.460 "data_offset": 2048, 00:09:31.460 "data_size": 63488 00:09:31.460 }, 00:09:31.460 { 00:09:31.460 "name": null, 00:09:31.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.460 "is_configured": false, 00:09:31.460 "data_offset": 2048, 00:09:31.460 "data_size": 63488 00:09:31.460 }, 00:09:31.460 { 00:09:31.460 "name": null, 00:09:31.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.460 "is_configured": false, 00:09:31.460 "data_offset": 2048, 00:09:31.460 "data_size": 63488 00:09:31.460 } 00:09:31.460 ] 00:09:31.460 }' 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.460 18:50:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.722 [2024-11-16 18:50:15.110273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:31.722 [2024-11-16 18:50:15.110330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:31.722 [2024-11-16 18:50:15.110355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:09:31.722 
[2024-11-16 18:50:15.110366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:31.722 [2024-11-16 18:50:15.110778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:31.722 [2024-11-16 18:50:15.110801] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:31.722 [2024-11-16 18:50:15.110883] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:31.722 [2024-11-16 18:50:15.110906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:31.722 pt2 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.722 [2024-11-16 18:50:15.122257] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:31.722 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:31.723 "name": "raid_bdev1", 00:09:31.723 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:31.723 "strip_size_kb": 0, 00:09:31.723 "state": "configuring", 00:09:31.723 "raid_level": "raid1", 00:09:31.723 "superblock": true, 00:09:31.723 "num_base_bdevs": 3, 00:09:31.723 "num_base_bdevs_discovered": 1, 00:09:31.723 "num_base_bdevs_operational": 3, 00:09:31.723 "base_bdevs_list": [ 00:09:31.723 { 00:09:31.723 "name": "pt1", 00:09:31.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.723 "is_configured": true, 00:09:31.723 "data_offset": 2048, 00:09:31.723 "data_size": 63488 00:09:31.723 }, 00:09:31.723 { 00:09:31.723 "name": null, 00:09:31.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.723 "is_configured": false, 00:09:31.723 "data_offset": 0, 00:09:31.723 "data_size": 63488 00:09:31.723 }, 00:09:31.723 { 00:09:31.723 "name": null, 00:09:31.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.723 "is_configured": false, 00:09:31.723 
"data_offset": 2048, 00:09:31.723 "data_size": 63488 00:09:31.723 } 00:09:31.723 ] 00:09:31.723 }' 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:31.723 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.308 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:32.308 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:32.308 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:32.308 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.308 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.308 [2024-11-16 18:50:15.557517] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:32.308 [2024-11-16 18:50:15.557589] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.308 [2024-11-16 18:50:15.557611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:32.308 [2024-11-16 18:50:15.557625] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.308 [2024-11-16 18:50:15.558075] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.308 [2024-11-16 18:50:15.558102] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:32.308 [2024-11-16 18:50:15.558184] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:32.308 [2024-11-16 18:50:15.558229] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:32.308 pt2 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.309 18:50:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.309 [2024-11-16 18:50:15.569472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:32.309 [2024-11-16 18:50:15.569526] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:32.309 [2024-11-16 18:50:15.569546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:32.309 [2024-11-16 18:50:15.569558] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:32.309 [2024-11-16 18:50:15.569955] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:32.309 [2024-11-16 18:50:15.569984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:32.309 [2024-11-16 18:50:15.570048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:32.309 [2024-11-16 18:50:15.570074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:32.309 [2024-11-16 18:50:15.570206] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:32.309 [2024-11-16 18:50:15.570223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:32.309 [2024-11-16 18:50:15.570456] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:32.309 [2024-11-16 18:50:15.570618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:09:32.309 [2024-11-16 18:50:15.570634] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:32.309 [2024-11-16 18:50:15.570797] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.309 pt3 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.309 18:50:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.309 "name": "raid_bdev1", 00:09:32.309 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:32.309 "strip_size_kb": 0, 00:09:32.309 "state": "online", 00:09:32.309 "raid_level": "raid1", 00:09:32.309 "superblock": true, 00:09:32.309 "num_base_bdevs": 3, 00:09:32.309 "num_base_bdevs_discovered": 3, 00:09:32.309 "num_base_bdevs_operational": 3, 00:09:32.309 "base_bdevs_list": [ 00:09:32.309 { 00:09:32.309 "name": "pt1", 00:09:32.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.309 "is_configured": true, 00:09:32.309 "data_offset": 2048, 00:09:32.309 "data_size": 63488 00:09:32.309 }, 00:09:32.309 { 00:09:32.309 "name": "pt2", 00:09:32.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.309 "is_configured": true, 00:09:32.309 "data_offset": 2048, 00:09:32.309 "data_size": 63488 00:09:32.309 }, 00:09:32.309 { 00:09:32.309 "name": "pt3", 00:09:32.309 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:32.309 "is_configured": true, 00:09:32.309 "data_offset": 2048, 00:09:32.309 "data_size": 63488 00:09:32.309 } 00:09:32.309 ] 00:09:32.309 }' 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.309 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.574 18:50:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:32.574 [2024-11-16 18:50:15.997073] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.574 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.574 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:32.574 "name": "raid_bdev1", 00:09:32.574 "aliases": [ 00:09:32.574 "92cfb832-0b98-47d0-893b-b92ba82347a0" 00:09:32.574 ], 00:09:32.574 "product_name": "Raid Volume", 00:09:32.574 "block_size": 512, 00:09:32.574 "num_blocks": 63488, 00:09:32.574 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:32.574 "assigned_rate_limits": { 00:09:32.574 "rw_ios_per_sec": 0, 00:09:32.574 "rw_mbytes_per_sec": 0, 00:09:32.574 "r_mbytes_per_sec": 0, 00:09:32.574 "w_mbytes_per_sec": 0 00:09:32.574 }, 00:09:32.574 "claimed": false, 00:09:32.574 "zoned": false, 00:09:32.574 "supported_io_types": { 00:09:32.574 "read": true, 00:09:32.574 "write": true, 00:09:32.574 "unmap": false, 00:09:32.574 "flush": false, 00:09:32.574 "reset": true, 00:09:32.574 "nvme_admin": false, 00:09:32.574 "nvme_io": false, 00:09:32.574 "nvme_io_md": false, 00:09:32.574 "write_zeroes": true, 00:09:32.574 "zcopy": false, 00:09:32.574 "get_zone_info": 
false, 00:09:32.574 "zone_management": false, 00:09:32.574 "zone_append": false, 00:09:32.574 "compare": false, 00:09:32.574 "compare_and_write": false, 00:09:32.574 "abort": false, 00:09:32.574 "seek_hole": false, 00:09:32.574 "seek_data": false, 00:09:32.574 "copy": false, 00:09:32.574 "nvme_iov_md": false 00:09:32.574 }, 00:09:32.574 "memory_domains": [ 00:09:32.574 { 00:09:32.574 "dma_device_id": "system", 00:09:32.574 "dma_device_type": 1 00:09:32.574 }, 00:09:32.574 { 00:09:32.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.574 "dma_device_type": 2 00:09:32.574 }, 00:09:32.574 { 00:09:32.574 "dma_device_id": "system", 00:09:32.574 "dma_device_type": 1 00:09:32.574 }, 00:09:32.574 { 00:09:32.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.574 "dma_device_type": 2 00:09:32.574 }, 00:09:32.574 { 00:09:32.574 "dma_device_id": "system", 00:09:32.574 "dma_device_type": 1 00:09:32.574 }, 00:09:32.574 { 00:09:32.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.574 "dma_device_type": 2 00:09:32.574 } 00:09:32.574 ], 00:09:32.574 "driver_specific": { 00:09:32.574 "raid": { 00:09:32.574 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:32.574 "strip_size_kb": 0, 00:09:32.574 "state": "online", 00:09:32.574 "raid_level": "raid1", 00:09:32.574 "superblock": true, 00:09:32.574 "num_base_bdevs": 3, 00:09:32.574 "num_base_bdevs_discovered": 3, 00:09:32.574 "num_base_bdevs_operational": 3, 00:09:32.574 "base_bdevs_list": [ 00:09:32.574 { 00:09:32.574 "name": "pt1", 00:09:32.574 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:32.574 "is_configured": true, 00:09:32.574 "data_offset": 2048, 00:09:32.574 "data_size": 63488 00:09:32.574 }, 00:09:32.574 { 00:09:32.574 "name": "pt2", 00:09:32.574 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:32.574 "is_configured": true, 00:09:32.574 "data_offset": 2048, 00:09:32.574 "data_size": 63488 00:09:32.574 }, 00:09:32.574 { 00:09:32.574 "name": "pt3", 00:09:32.574 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:09:32.574 "is_configured": true, 00:09:32.574 "data_offset": 2048, 00:09:32.574 "data_size": 63488 00:09:32.574 } 00:09:32.574 ] 00:09:32.574 } 00:09:32.574 } 00:09:32.574 }' 00:09:32.574 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:32.835 pt2 00:09:32.835 pt3' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.835 18:50:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:32.835 [2024-11-16 18:50:16.240615] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 92cfb832-0b98-47d0-893b-b92ba82347a0 '!=' 92cfb832-0b98-47d0-893b-b92ba82347a0 ']' 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.835 [2024-11-16 18:50:16.288322] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.835 18:50:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.835 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.095 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.095 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.095 "name": "raid_bdev1", 00:09:33.095 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:33.095 "strip_size_kb": 0, 00:09:33.095 "state": "online", 00:09:33.095 "raid_level": "raid1", 00:09:33.095 "superblock": true, 00:09:33.095 "num_base_bdevs": 3, 00:09:33.095 "num_base_bdevs_discovered": 2, 00:09:33.095 "num_base_bdevs_operational": 2, 00:09:33.095 "base_bdevs_list": [ 00:09:33.095 { 00:09:33.095 "name": null, 00:09:33.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.095 "is_configured": false, 00:09:33.095 "data_offset": 0, 00:09:33.095 "data_size": 63488 00:09:33.095 }, 00:09:33.095 { 00:09:33.095 "name": "pt2", 00:09:33.095 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.095 "is_configured": true, 00:09:33.095 "data_offset": 2048, 00:09:33.095 "data_size": 63488 00:09:33.095 }, 00:09:33.095 { 00:09:33.095 "name": "pt3", 00:09:33.095 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.095 "is_configured": true, 00:09:33.095 "data_offset": 2048, 00:09:33.095 "data_size": 63488 00:09:33.095 } 
00:09:33.095 ] 00:09:33.095 }' 00:09:33.095 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.095 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.357 [2024-11-16 18:50:16.687598] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:33.357 [2024-11-16 18:50:16.687629] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:33.357 [2024-11-16 18:50:16.687733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:33.357 [2024-11-16 18:50:16.687790] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:33.357 [2024-11-16 18:50:16.687804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.357 18:50:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.357 [2024-11-16 18:50:16.771418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:33.357 [2024-11-16 18:50:16.771471] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.357 [2024-11-16 18:50:16.771502] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:09:33.357 [2024-11-16 18:50:16.771513] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.357 [2024-11-16 18:50:16.773634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.357 [2024-11-16 18:50:16.773684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:33.357 [2024-11-16 18:50:16.773759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:33.357 [2024-11-16 18:50:16.773809] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:33.357 pt2 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.357 18:50:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.357 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.616 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.616 "name": "raid_bdev1", 00:09:33.616 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:33.616 "strip_size_kb": 0, 00:09:33.616 "state": "configuring", 00:09:33.617 "raid_level": "raid1", 00:09:33.617 "superblock": true, 00:09:33.617 "num_base_bdevs": 3, 00:09:33.617 "num_base_bdevs_discovered": 1, 00:09:33.617 "num_base_bdevs_operational": 2, 00:09:33.617 "base_bdevs_list": [ 00:09:33.617 { 00:09:33.617 "name": null, 00:09:33.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.617 "is_configured": false, 00:09:33.617 "data_offset": 2048, 00:09:33.617 "data_size": 63488 00:09:33.617 }, 00:09:33.617 { 00:09:33.617 "name": "pt2", 00:09:33.617 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.617 "is_configured": true, 00:09:33.617 "data_offset": 2048, 00:09:33.617 "data_size": 63488 00:09:33.617 }, 00:09:33.617 { 00:09:33.617 "name": null, 00:09:33.617 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.617 "is_configured": false, 00:09:33.617 "data_offset": 2048, 00:09:33.617 "data_size": 63488 00:09:33.617 } 
00:09:33.617 ] 00:09:33.617 }' 00:09:33.617 18:50:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.617 18:50:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.875 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:09:33.875 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:33.875 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:09:33.875 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:33.875 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.875 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.875 [2024-11-16 18:50:17.242650] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:33.875 [2024-11-16 18:50:17.242728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:33.876 [2024-11-16 18:50:17.242754] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:09:33.876 [2024-11-16 18:50:17.242769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:33.876 [2024-11-16 18:50:17.243220] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:33.876 [2024-11-16 18:50:17.243247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:33.876 [2024-11-16 18:50:17.243350] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:33.876 [2024-11-16 18:50:17.243381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:33.876 [2024-11-16 18:50:17.243525] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:09:33.876 [2024-11-16 18:50:17.243541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:33.876 [2024-11-16 18:50:17.243806] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:33.876 [2024-11-16 18:50:17.243981] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:33.876 [2024-11-16 18:50:17.243993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:33.876 [2024-11-16 18:50:17.244144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:33.876 pt3 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.876 
18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.876 "name": "raid_bdev1", 00:09:33.876 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:33.876 "strip_size_kb": 0, 00:09:33.876 "state": "online", 00:09:33.876 "raid_level": "raid1", 00:09:33.876 "superblock": true, 00:09:33.876 "num_base_bdevs": 3, 00:09:33.876 "num_base_bdevs_discovered": 2, 00:09:33.876 "num_base_bdevs_operational": 2, 00:09:33.876 "base_bdevs_list": [ 00:09:33.876 { 00:09:33.876 "name": null, 00:09:33.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.876 "is_configured": false, 00:09:33.876 "data_offset": 2048, 00:09:33.876 "data_size": 63488 00:09:33.876 }, 00:09:33.876 { 00:09:33.876 "name": "pt2", 00:09:33.876 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:33.876 "is_configured": true, 00:09:33.876 "data_offset": 2048, 00:09:33.876 "data_size": 63488 00:09:33.876 }, 00:09:33.876 { 00:09:33.876 "name": "pt3", 00:09:33.876 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:33.876 "is_configured": true, 00:09:33.876 "data_offset": 2048, 00:09:33.876 "data_size": 63488 00:09:33.876 } 00:09:33.876 ] 00:09:33.876 }' 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.876 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 [2024-11-16 18:50:17.669869] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.444 [2024-11-16 18:50:17.669902] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:34.444 [2024-11-16 18:50:17.669974] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:34.444 [2024-11-16 18:50:17.670040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:34.444 [2024-11-16 18:50:17.670053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 [2024-11-16 18:50:17.741791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:34.444 [2024-11-16 18:50:17.741842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.444 [2024-11-16 18:50:17.741874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:09:34.444 [2024-11-16 18:50:17.741882] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.444 [2024-11-16 18:50:17.744014] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.444 [2024-11-16 18:50:17.744051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:34.444 [2024-11-16 18:50:17.744126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:34.444 [2024-11-16 18:50:17.744169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:34.444 [2024-11-16 18:50:17.744292] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:34.444 [2024-11-16 18:50:17.744302] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:34.444 [2024-11-16 18:50:17.744316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:09:34.444 [2024-11-16 18:50:17.744401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:34.444 pt1 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.444 "name": "raid_bdev1", 00:09:34.444 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:34.444 "strip_size_kb": 0, 00:09:34.444 "state": "configuring", 00:09:34.444 "raid_level": "raid1", 00:09:34.444 "superblock": true, 00:09:34.444 "num_base_bdevs": 3, 00:09:34.444 "num_base_bdevs_discovered": 1, 00:09:34.444 "num_base_bdevs_operational": 2, 00:09:34.444 "base_bdevs_list": [ 00:09:34.444 { 00:09:34.444 "name": null, 00:09:34.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.444 "is_configured": false, 00:09:34.444 "data_offset": 2048, 00:09:34.444 "data_size": 63488 00:09:34.444 }, 00:09:34.444 { 00:09:34.444 "name": "pt2", 00:09:34.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:34.444 "is_configured": true, 00:09:34.444 "data_offset": 2048, 00:09:34.444 "data_size": 63488 00:09:34.444 }, 00:09:34.444 { 00:09:34.444 "name": null, 00:09:34.444 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:34.444 "is_configured": false, 00:09:34.444 "data_offset": 2048, 00:09:34.444 "data_size": 63488 00:09:34.444 } 00:09:34.444 ] 00:09:34.444 }' 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.444 18:50:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 [2024-11-16 18:50:18.228944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:35.014 [2024-11-16 18:50:18.229004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:35.014 [2024-11-16 18:50:18.229029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:09:35.014 [2024-11-16 18:50:18.229056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:35.014 [2024-11-16 18:50:18.229521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:35.014 [2024-11-16 18:50:18.229544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:35.014 [2024-11-16 18:50:18.229621] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:35.014 [2024-11-16 18:50:18.229695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:35.014 [2024-11-16 18:50:18.229825] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:35.014 [2024-11-16 18:50:18.229839] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:35.014 [2024-11-16 18:50:18.230091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:09:35.014 [2024-11-16 18:50:18.230251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:35.014 [2024-11-16 18:50:18.230270] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:35.014 [2024-11-16 18:50:18.230412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:35.014 pt3 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:09:35.014 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.014 "name": "raid_bdev1", 00:09:35.014 "uuid": "92cfb832-0b98-47d0-893b-b92ba82347a0", 00:09:35.014 "strip_size_kb": 0, 00:09:35.014 "state": "online", 00:09:35.014 "raid_level": "raid1", 00:09:35.014 "superblock": true, 00:09:35.014 "num_base_bdevs": 3, 00:09:35.014 "num_base_bdevs_discovered": 2, 00:09:35.014 "num_base_bdevs_operational": 2, 00:09:35.014 "base_bdevs_list": [ 00:09:35.014 { 00:09:35.014 "name": null, 00:09:35.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.014 "is_configured": false, 00:09:35.015 "data_offset": 2048, 00:09:35.015 "data_size": 63488 00:09:35.015 }, 00:09:35.015 { 00:09:35.015 "name": "pt2", 00:09:35.015 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:35.015 "is_configured": true, 00:09:35.015 "data_offset": 2048, 00:09:35.015 "data_size": 63488 00:09:35.015 }, 00:09:35.015 { 00:09:35.015 "name": "pt3", 00:09:35.015 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:35.015 "is_configured": true, 00:09:35.015 "data_offset": 2048, 00:09:35.015 "data_size": 63488 00:09:35.015 } 00:09:35.015 ] 00:09:35.015 }' 00:09:35.015 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.015 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.275 [2024-11-16 18:50:18.668465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 92cfb832-0b98-47d0-893b-b92ba82347a0 '!=' 92cfb832-0b98-47d0-893b-b92ba82347a0 ']' 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68446 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68446 ']' 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68446 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:35.275 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68446 00:09:35.534 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.535 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.535 killing process with pid 68446 00:09:35.535 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68446' 00:09:35.535 18:50:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68446 00:09:35.535 [2024-11-16 18:50:18.753770] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.535 [2024-11-16 18:50:18.753857] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.535 [2024-11-16 18:50:18.753929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.535 [2024-11-16 18:50:18.753940] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:35.535 18:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68446 00:09:35.794 [2024-11-16 18:50:19.046005] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:36.736 18:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:36.736 00:09:36.736 real 0m7.370s 00:09:36.736 user 0m11.548s 00:09:36.736 sys 0m1.229s 00:09:36.736 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.736 18:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 ************************************ 00:09:36.736 END TEST raid_superblock_test 00:09:36.736 ************************************ 00:09:36.736 18:50:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:09:36.736 18:50:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:36.736 18:50:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.736 18:50:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:36.736 ************************************ 00:09:36.736 START TEST raid_read_error_test 00:09:36.736 ************************************ 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:09:36.736 18:50:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:36.736 18:50:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RNG3QaXReY 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68883 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68883 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68883 ']' 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:36.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:36.736 18:50:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.996 [2024-11-16 18:50:20.286373] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:36.996 [2024-11-16 18:50:20.286485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68883 ] 00:09:36.996 [2024-11-16 18:50:20.462024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.257 [2024-11-16 18:50:20.572328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.516 [2024-11-16 18:50:20.760442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.517 [2024-11-16 18:50:20.760509] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.777 BaseBdev1_malloc 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.777 true 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.777 [2024-11-16 18:50:21.188102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:37.777 [2024-11-16 18:50:21.188162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:37.777 [2024-11-16 18:50:21.188197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:37.777 [2024-11-16 18:50:21.188208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:37.777 [2024-11-16 18:50:21.190173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:37.777 [2024-11-16 18:50:21.190211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:37.777 BaseBdev1 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.777 BaseBdev2_malloc 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.777 true 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.777 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.038 [2024-11-16 18:50:21.252515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:38.038 [2024-11-16 18:50:21.252568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.038 [2024-11-16 18:50:21.252599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:38.038 [2024-11-16 18:50:21.252610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.038 [2024-11-16 18:50:21.254581] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.038 [2024-11-16 18:50:21.254618] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:38.038 BaseBdev2 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.038 BaseBdev3_malloc 00:09:38.038 18:50:21 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.038 true 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.038 [2024-11-16 18:50:21.336206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:38.038 [2024-11-16 18:50:21.336257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.038 [2024-11-16 18:50:21.336272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:38.038 [2024-11-16 18:50:21.336298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.038 [2024-11-16 18:50:21.338283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.038 [2024-11-16 18:50:21.338320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:38.038 BaseBdev3 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.038 [2024-11-16 18:50:21.348248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.038 [2024-11-16 18:50:21.349987] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:38.038 [2024-11-16 18:50:21.350075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:38.038 [2024-11-16 18:50:21.350281] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:38.038 [2024-11-16 18:50:21.350301] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.038 [2024-11-16 18:50:21.350535] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:38.038 [2024-11-16 18:50:21.350716] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:38.038 [2024-11-16 18:50:21.350734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:38.038 [2024-11-16 18:50:21.350864] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.038 18:50:21 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.038 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.039 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.039 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.039 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.039 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.039 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.039 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.039 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.039 "name": "raid_bdev1", 00:09:38.039 "uuid": "dc1f669b-adc4-499c-bbad-b7424cc12dbe", 00:09:38.039 "strip_size_kb": 0, 00:09:38.039 "state": "online", 00:09:38.039 "raid_level": "raid1", 00:09:38.039 "superblock": true, 00:09:38.039 "num_base_bdevs": 3, 00:09:38.039 "num_base_bdevs_discovered": 3, 00:09:38.039 "num_base_bdevs_operational": 3, 00:09:38.039 "base_bdevs_list": [ 00:09:38.039 { 00:09:38.039 "name": "BaseBdev1", 00:09:38.039 "uuid": "1e2c7a77-d57d-5481-a3b3-3248d14e6bbf", 00:09:38.039 "is_configured": true, 00:09:38.039 "data_offset": 2048, 00:09:38.039 "data_size": 63488 00:09:38.039 }, 00:09:38.039 { 00:09:38.039 "name": "BaseBdev2", 00:09:38.039 "uuid": "82c52948-42e7-5d22-b5e7-ade7e7e4ae9b", 00:09:38.039 "is_configured": true, 00:09:38.039 "data_offset": 2048, 00:09:38.039 "data_size": 63488 
00:09:38.039 }, 00:09:38.039 { 00:09:38.039 "name": "BaseBdev3", 00:09:38.039 "uuid": "148b79d3-b82b-53b6-b92f-db6b90de1f71", 00:09:38.039 "is_configured": true, 00:09:38.039 "data_offset": 2048, 00:09:38.039 "data_size": 63488 00:09:38.039 } 00:09:38.039 ] 00:09:38.039 }' 00:09:38.039 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.039 18:50:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.298 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:38.298 18:50:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:38.559 [2024-11-16 18:50:21.780776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.497 
18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.497 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.497 "name": "raid_bdev1", 00:09:39.497 "uuid": "dc1f669b-adc4-499c-bbad-b7424cc12dbe", 00:09:39.497 "strip_size_kb": 0, 00:09:39.497 "state": "online", 00:09:39.497 "raid_level": "raid1", 00:09:39.497 "superblock": true, 00:09:39.497 "num_base_bdevs": 3, 00:09:39.497 "num_base_bdevs_discovered": 3, 00:09:39.497 "num_base_bdevs_operational": 3, 00:09:39.497 "base_bdevs_list": [ 00:09:39.497 { 00:09:39.497 "name": "BaseBdev1", 00:09:39.497 "uuid": "1e2c7a77-d57d-5481-a3b3-3248d14e6bbf", 
00:09:39.497 "is_configured": true, 00:09:39.497 "data_offset": 2048, 00:09:39.497 "data_size": 63488 00:09:39.497 }, 00:09:39.497 { 00:09:39.497 "name": "BaseBdev2", 00:09:39.497 "uuid": "82c52948-42e7-5d22-b5e7-ade7e7e4ae9b", 00:09:39.497 "is_configured": true, 00:09:39.497 "data_offset": 2048, 00:09:39.497 "data_size": 63488 00:09:39.497 }, 00:09:39.497 { 00:09:39.497 "name": "BaseBdev3", 00:09:39.497 "uuid": "148b79d3-b82b-53b6-b92f-db6b90de1f71", 00:09:39.498 "is_configured": true, 00:09:39.498 "data_offset": 2048, 00:09:39.498 "data_size": 63488 00:09:39.498 } 00:09:39.498 ] 00:09:39.498 }' 00:09:39.498 18:50:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.498 18:50:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.757 [2024-11-16 18:50:23.103559] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.757 [2024-11-16 18:50:23.103595] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.757 [2024-11-16 18:50:23.106149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.757 [2024-11-16 18:50:23.106199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.757 [2024-11-16 18:50:23.106318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.757 [2024-11-16 18:50:23.106332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:39.757 { 00:09:39.757 "results": [ 00:09:39.757 { 00:09:39.757 "job": "raid_bdev1", 
00:09:39.757 "core_mask": "0x1", 00:09:39.757 "workload": "randrw", 00:09:39.757 "percentage": 50, 00:09:39.757 "status": "finished", 00:09:39.757 "queue_depth": 1, 00:09:39.757 "io_size": 131072, 00:09:39.757 "runtime": 1.32361, 00:09:39.757 "iops": 14023.76833055054, 00:09:39.757 "mibps": 1752.9710413188175, 00:09:39.757 "io_failed": 0, 00:09:39.757 "io_timeout": 0, 00:09:39.757 "avg_latency_us": 68.87828173161208, 00:09:39.757 "min_latency_us": 21.910917030567685, 00:09:39.757 "max_latency_us": 1423.7624454148472 00:09:39.757 } 00:09:39.757 ], 00:09:39.757 "core_count": 1 00:09:39.757 } 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68883 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68883 ']' 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68883 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68883 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:39.757 killing process with pid 68883 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68883' 00:09:39.757 18:50:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68883 00:09:39.757 [2024-11-16 18:50:23.155021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:39.757 18:50:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68883 00:09:40.017 [2024-11-16 18:50:23.372041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RNG3QaXReY 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:41.398 00:09:41.398 real 0m4.314s 00:09:41.398 user 0m5.028s 00:09:41.398 sys 0m0.553s 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.398 18:50:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.398 ************************************ 00:09:41.398 END TEST raid_read_error_test 00:09:41.398 ************************************ 00:09:41.398 18:50:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:09:41.398 18:50:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:41.398 18:50:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.398 18:50:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.398 ************************************ 00:09:41.398 START TEST raid_write_error_test 00:09:41.398 ************************************ 00:09:41.398 18:50:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PYxxeV4x1B 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69031 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69031 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69031 ']' 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.398 18:50:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.398 [2024-11-16 18:50:24.672746] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:41.398 [2024-11-16 18:50:24.673280] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69031 ] 00:09:41.398 [2024-11-16 18:50:24.840630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:41.658 [2024-11-16 18:50:24.952320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:41.925 [2024-11-16 18:50:25.141751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:41.925 [2024-11-16 18:50:25.141813] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.185 BaseBdev1_malloc 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.185 true 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.185 [2024-11-16 18:50:25.576122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:42.185 [2024-11-16 18:50:25.576352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.185 [2024-11-16 18:50:25.576416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:42.185 [2024-11-16 18:50:25.576468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.185 [2024-11-16 18:50:25.578566] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.185 [2024-11-16 18:50:25.578702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:42.185 BaseBdev1 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:42.185 BaseBdev2_malloc 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.185 true 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.185 [2024-11-16 18:50:25.642120] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:42.185 [2024-11-16 18:50:25.642288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.185 [2024-11-16 18:50:25.642340] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:42.185 [2024-11-16 18:50:25.642388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.185 [2024-11-16 18:50:25.644496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.185 [2024-11-16 18:50:25.644607] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:42.185 BaseBdev2 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:42.185 18:50:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.185 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.445 BaseBdev3_malloc 00:09:42.445 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.445 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:42.445 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.446 true 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.446 [2024-11-16 18:50:25.718758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:42.446 [2024-11-16 18:50:25.719159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:42.446 [2024-11-16 18:50:25.719225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:42.446 [2024-11-16 18:50:25.719274] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:42.446 [2024-11-16 18:50:25.721411] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:42.446 [2024-11-16 18:50:25.721557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:42.446 BaseBdev3 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.446 [2024-11-16 18:50:25.730810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:42.446 [2024-11-16 18:50:25.732674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:42.446 [2024-11-16 18:50:25.732790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:42.446 [2024-11-16 18:50:25.733040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:42.446 [2024-11-16 18:50:25.733090] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:42.446 [2024-11-16 18:50:25.733350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:42.446 [2024-11-16 18:50:25.733566] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:42.446 [2024-11-16 18:50:25.733615] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:42.446 [2024-11-16 18:50:25.733814] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:42.446 "name": "raid_bdev1", 00:09:42.446 "uuid": "6dca89e7-d044-4d31-b53c-4c7c92d1aa0d", 00:09:42.446 "strip_size_kb": 0, 00:09:42.446 "state": "online", 00:09:42.446 "raid_level": "raid1", 00:09:42.446 "superblock": true, 00:09:42.446 "num_base_bdevs": 3, 00:09:42.446 "num_base_bdevs_discovered": 3, 00:09:42.446 "num_base_bdevs_operational": 3, 00:09:42.446 "base_bdevs_list": [ 00:09:42.446 { 00:09:42.446 "name": "BaseBdev1", 00:09:42.446 
"uuid": "177743c5-6a2d-5844-84c7-a0531e929eb0", 00:09:42.446 "is_configured": true, 00:09:42.446 "data_offset": 2048, 00:09:42.446 "data_size": 63488 00:09:42.446 }, 00:09:42.446 { 00:09:42.446 "name": "BaseBdev2", 00:09:42.446 "uuid": "1c9d9949-dee9-5393-b0c7-f4ded838ddb0", 00:09:42.446 "is_configured": true, 00:09:42.446 "data_offset": 2048, 00:09:42.446 "data_size": 63488 00:09:42.446 }, 00:09:42.446 { 00:09:42.446 "name": "BaseBdev3", 00:09:42.446 "uuid": "f5438ef1-3a7d-5d96-a5cc-01bfebecb157", 00:09:42.446 "is_configured": true, 00:09:42.446 "data_offset": 2048, 00:09:42.446 "data_size": 63488 00:09:42.446 } 00:09:42.446 ] 00:09:42.446 }' 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:42.446 18:50:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.706 18:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:42.706 18:50:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:42.965 [2024-11-16 18:50:26.211515] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.913 [2024-11-16 18:50:27.130267] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:43.913 [2024-11-16 18:50:27.130739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:43.913 [2024-11-16 18:50:27.130994] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.913 
18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:43.913 "name": "raid_bdev1", 00:09:43.913 "uuid": "6dca89e7-d044-4d31-b53c-4c7c92d1aa0d", 00:09:43.913 "strip_size_kb": 0, 00:09:43.913 "state": "online", 00:09:43.913 "raid_level": "raid1", 00:09:43.913 "superblock": true, 00:09:43.913 "num_base_bdevs": 3, 00:09:43.913 "num_base_bdevs_discovered": 2, 00:09:43.913 "num_base_bdevs_operational": 2, 00:09:43.913 "base_bdevs_list": [ 00:09:43.913 { 00:09:43.913 "name": null, 00:09:43.913 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:43.913 "is_configured": false, 00:09:43.913 "data_offset": 0, 00:09:43.913 "data_size": 63488 00:09:43.913 }, 00:09:43.913 { 00:09:43.913 "name": "BaseBdev2", 00:09:43.913 "uuid": "1c9d9949-dee9-5393-b0c7-f4ded838ddb0", 00:09:43.913 "is_configured": true, 00:09:43.913 "data_offset": 2048, 00:09:43.913 "data_size": 63488 00:09:43.913 }, 00:09:43.913 { 00:09:43.913 "name": "BaseBdev3", 00:09:43.913 "uuid": "f5438ef1-3a7d-5d96-a5cc-01bfebecb157", 00:09:43.913 "is_configured": true, 00:09:43.913 "data_offset": 2048, 00:09:43.913 "data_size": 63488 00:09:43.913 } 00:09:43.913 ] 00:09:43.913 }' 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:43.913 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.185 [2024-11-16 18:50:27.572452] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:44.185 [2024-11-16 18:50:27.572487] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.185 [2024-11-16 18:50:27.575320] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.185 [2024-11-16 18:50:27.575437] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.185 [2024-11-16 18:50:27.575551] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:44.185 [2024-11-16 18:50:27.575600] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.185 { 00:09:44.185 "results": [ 00:09:44.185 { 00:09:44.185 "job": "raid_bdev1", 00:09:44.185 "core_mask": "0x1", 00:09:44.185 "workload": "randrw", 00:09:44.185 "percentage": 50, 00:09:44.185 "status": "finished", 00:09:44.185 "queue_depth": 1, 00:09:44.185 "io_size": 131072, 00:09:44.185 "runtime": 1.361687, 00:09:44.185 "iops": 15040.901470014769, 00:09:44.185 "mibps": 1880.1126837518461, 00:09:44.185 "io_failed": 0, 00:09:44.185 "io_timeout": 0, 00:09:44.185 "avg_latency_us": 63.914174496375274, 00:09:44.185 "min_latency_us": 23.923144104803495, 00:09:44.185 "max_latency_us": 1423.7624454148472 00:09:44.185 } 00:09:44.185 ], 00:09:44.185 "core_count": 1 00:09:44.185 } 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69031 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69031 ']' 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69031 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:44.185 18:50:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69031 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69031' 00:09:44.185 killing process with pid 69031 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69031 00:09:44.185 [2024-11-16 18:50:27.625275] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:44.185 18:50:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69031 00:09:44.445 [2024-11-16 18:50:27.852531] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PYxxeV4x1B 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:45.826 ************************************ 00:09:45.826 END TEST raid_write_error_test 00:09:45.826 ************************************ 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:09:45.826 00:09:45.826 real 0m4.427s 00:09:45.826 user 0m5.220s 00:09:45.826 sys 0m0.554s 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:45.826 18:50:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.826 18:50:29 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:45.826 18:50:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:45.826 18:50:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:09:45.826 18:50:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:45.826 18:50:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.826 18:50:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:45.826 ************************************ 00:09:45.826 START TEST raid_state_function_test 00:09:45.826 ************************************ 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.826 
18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:45.826 18:50:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69174 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69174' 00:09:45.826 Process raid pid: 69174 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69174 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69174 ']' 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.826 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.826 [2024-11-16 18:50:29.156157] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:45.826 [2024-11-16 18:50:29.156349] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.085 [2024-11-16 18:50:29.332611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:46.085 [2024-11-16 18:50:29.447345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.344 [2024-11-16 18:50:29.655171] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.344 [2024-11-16 18:50:29.655286] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:46.605 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.605 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:46.605 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:46.605 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.606 [2024-11-16 18:50:29.980215] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:46.606 [2024-11-16 18:50:29.980323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:46.606 [2024-11-16 18:50:29.980353] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:46.606 [2024-11-16 18:50:29.980376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:46.606 [2024-11-16 18:50:29.980394] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:09:46.606 [2024-11-16 18:50:29.980414] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:46.606 [2024-11-16 18:50:29.980447] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:46.606 [2024-11-16 18:50:29.980494] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.606 18:50:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.606 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.606 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.606 "name": "Existed_Raid", 00:09:46.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.606 "strip_size_kb": 64, 00:09:46.606 "state": "configuring", 00:09:46.606 "raid_level": "raid0", 00:09:46.606 "superblock": false, 00:09:46.606 "num_base_bdevs": 4, 00:09:46.606 "num_base_bdevs_discovered": 0, 00:09:46.606 "num_base_bdevs_operational": 4, 00:09:46.606 "base_bdevs_list": [ 00:09:46.606 { 00:09:46.606 "name": "BaseBdev1", 00:09:46.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.606 "is_configured": false, 00:09:46.606 "data_offset": 0, 00:09:46.606 "data_size": 0 00:09:46.606 }, 00:09:46.606 { 00:09:46.606 "name": "BaseBdev2", 00:09:46.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.606 "is_configured": false, 00:09:46.606 "data_offset": 0, 00:09:46.606 "data_size": 0 00:09:46.606 }, 00:09:46.606 { 00:09:46.606 "name": "BaseBdev3", 00:09:46.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.606 "is_configured": false, 00:09:46.606 "data_offset": 0, 00:09:46.606 "data_size": 0 00:09:46.606 }, 00:09:46.606 { 00:09:46.606 "name": "BaseBdev4", 00:09:46.606 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.606 "is_configured": false, 00:09:46.606 "data_offset": 0, 00:09:46.606 "data_size": 0 00:09:46.606 } 00:09:46.606 ] 00:09:46.606 }' 00:09:46.606 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.606 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.176 [2024-11-16 18:50:30.371516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.176 [2024-11-16 18:50:30.371558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.176 [2024-11-16 18:50:30.379496] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.176 [2024-11-16 18:50:30.379539] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.176 [2024-11-16 18:50:30.379548] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.176 [2024-11-16 18:50:30.379557] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.176 [2024-11-16 18:50:30.379563] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.176 [2024-11-16 18:50:30.379573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.176 [2024-11-16 18:50:30.379578] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:47.176 [2024-11-16 18:50:30.379586] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.176 [2024-11-16 18:50:30.423022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.176 BaseBdev1 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.176 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.177 [ 00:09:47.177 { 00:09:47.177 "name": "BaseBdev1", 00:09:47.177 "aliases": [ 00:09:47.177 "188510e4-3539-4978-9768-a4ffd3590e4c" 00:09:47.177 ], 00:09:47.177 "product_name": "Malloc disk", 00:09:47.177 "block_size": 512, 00:09:47.177 "num_blocks": 65536, 00:09:47.177 "uuid": "188510e4-3539-4978-9768-a4ffd3590e4c", 00:09:47.177 "assigned_rate_limits": { 00:09:47.177 "rw_ios_per_sec": 0, 00:09:47.177 "rw_mbytes_per_sec": 0, 00:09:47.177 "r_mbytes_per_sec": 0, 00:09:47.177 "w_mbytes_per_sec": 0 00:09:47.177 }, 00:09:47.177 "claimed": true, 00:09:47.177 "claim_type": "exclusive_write", 00:09:47.177 "zoned": false, 00:09:47.177 "supported_io_types": { 00:09:47.177 "read": true, 00:09:47.177 "write": true, 00:09:47.177 "unmap": true, 00:09:47.177 "flush": true, 00:09:47.177 "reset": true, 00:09:47.177 "nvme_admin": false, 00:09:47.177 "nvme_io": false, 00:09:47.177 "nvme_io_md": false, 00:09:47.177 "write_zeroes": true, 00:09:47.177 "zcopy": true, 00:09:47.177 "get_zone_info": false, 00:09:47.177 "zone_management": false, 00:09:47.177 "zone_append": false, 00:09:47.177 "compare": false, 00:09:47.177 "compare_and_write": false, 00:09:47.177 "abort": true, 00:09:47.177 "seek_hole": false, 00:09:47.177 "seek_data": false, 00:09:47.177 "copy": true, 00:09:47.177 "nvme_iov_md": false 00:09:47.177 }, 00:09:47.177 "memory_domains": [ 00:09:47.177 { 00:09:47.177 "dma_device_id": "system", 00:09:47.177 "dma_device_type": 1 00:09:47.177 }, 00:09:47.177 { 00:09:47.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.177 "dma_device_type": 2 00:09:47.177 } 00:09:47.177 ], 00:09:47.177 "driver_specific": {} 00:09:47.177 } 00:09:47.177 ] 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.177 "name": "Existed_Raid", 
00:09:47.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.177 "strip_size_kb": 64, 00:09:47.177 "state": "configuring", 00:09:47.177 "raid_level": "raid0", 00:09:47.177 "superblock": false, 00:09:47.177 "num_base_bdevs": 4, 00:09:47.177 "num_base_bdevs_discovered": 1, 00:09:47.177 "num_base_bdevs_operational": 4, 00:09:47.177 "base_bdevs_list": [ 00:09:47.177 { 00:09:47.177 "name": "BaseBdev1", 00:09:47.177 "uuid": "188510e4-3539-4978-9768-a4ffd3590e4c", 00:09:47.177 "is_configured": true, 00:09:47.177 "data_offset": 0, 00:09:47.177 "data_size": 65536 00:09:47.177 }, 00:09:47.177 { 00:09:47.177 "name": "BaseBdev2", 00:09:47.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.177 "is_configured": false, 00:09:47.177 "data_offset": 0, 00:09:47.177 "data_size": 0 00:09:47.177 }, 00:09:47.177 { 00:09:47.177 "name": "BaseBdev3", 00:09:47.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.177 "is_configured": false, 00:09:47.177 "data_offset": 0, 00:09:47.177 "data_size": 0 00:09:47.177 }, 00:09:47.177 { 00:09:47.177 "name": "BaseBdev4", 00:09:47.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.177 "is_configured": false, 00:09:47.177 "data_offset": 0, 00:09:47.177 "data_size": 0 00:09:47.177 } 00:09:47.177 ] 00:09:47.177 }' 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.177 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.438 [2024-11-16 18:50:30.862310] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:47.438 [2024-11-16 18:50:30.862432] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.438 [2024-11-16 18:50:30.874325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:47.438 [2024-11-16 18:50:30.876109] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:47.438 [2024-11-16 18:50:30.876152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:47.438 [2024-11-16 18:50:30.876163] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:47.438 [2024-11-16 18:50:30.876173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:47.438 [2024-11-16 18:50:30.876179] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:47.438 [2024-11-16 18:50:30.876188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.438 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.698 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.698 "name": "Existed_Raid", 00:09:47.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.698 "strip_size_kb": 64, 00:09:47.698 "state": "configuring", 00:09:47.698 "raid_level": "raid0", 00:09:47.698 "superblock": false, 00:09:47.698 "num_base_bdevs": 4, 00:09:47.698 
"num_base_bdevs_discovered": 1, 00:09:47.698 "num_base_bdevs_operational": 4, 00:09:47.698 "base_bdevs_list": [ 00:09:47.698 { 00:09:47.698 "name": "BaseBdev1", 00:09:47.698 "uuid": "188510e4-3539-4978-9768-a4ffd3590e4c", 00:09:47.698 "is_configured": true, 00:09:47.698 "data_offset": 0, 00:09:47.698 "data_size": 65536 00:09:47.698 }, 00:09:47.698 { 00:09:47.698 "name": "BaseBdev2", 00:09:47.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.698 "is_configured": false, 00:09:47.698 "data_offset": 0, 00:09:47.698 "data_size": 0 00:09:47.698 }, 00:09:47.698 { 00:09:47.698 "name": "BaseBdev3", 00:09:47.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.698 "is_configured": false, 00:09:47.698 "data_offset": 0, 00:09:47.698 "data_size": 0 00:09:47.698 }, 00:09:47.698 { 00:09:47.698 "name": "BaseBdev4", 00:09:47.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.698 "is_configured": false, 00:09:47.698 "data_offset": 0, 00:09:47.698 "data_size": 0 00:09:47.698 } 00:09:47.698 ] 00:09:47.698 }' 00:09:47.698 18:50:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.698 18:50:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.958 [2024-11-16 18:50:31.325771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.958 BaseBdev2 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:47.958 18:50:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.958 [ 00:09:47.958 { 00:09:47.958 "name": "BaseBdev2", 00:09:47.958 "aliases": [ 00:09:47.958 "cfde43f8-eeb7-4e0f-8fd5-bafeb656f6d6" 00:09:47.958 ], 00:09:47.958 "product_name": "Malloc disk", 00:09:47.958 "block_size": 512, 00:09:47.958 "num_blocks": 65536, 00:09:47.958 "uuid": "cfde43f8-eeb7-4e0f-8fd5-bafeb656f6d6", 00:09:47.958 "assigned_rate_limits": { 00:09:47.958 "rw_ios_per_sec": 0, 00:09:47.958 "rw_mbytes_per_sec": 0, 00:09:47.958 "r_mbytes_per_sec": 0, 00:09:47.958 "w_mbytes_per_sec": 0 00:09:47.958 }, 00:09:47.958 "claimed": true, 00:09:47.958 "claim_type": "exclusive_write", 00:09:47.958 "zoned": false, 00:09:47.958 "supported_io_types": { 
00:09:47.958 "read": true, 00:09:47.958 "write": true, 00:09:47.958 "unmap": true, 00:09:47.958 "flush": true, 00:09:47.958 "reset": true, 00:09:47.958 "nvme_admin": false, 00:09:47.958 "nvme_io": false, 00:09:47.958 "nvme_io_md": false, 00:09:47.958 "write_zeroes": true, 00:09:47.958 "zcopy": true, 00:09:47.958 "get_zone_info": false, 00:09:47.958 "zone_management": false, 00:09:47.958 "zone_append": false, 00:09:47.958 "compare": false, 00:09:47.958 "compare_and_write": false, 00:09:47.958 "abort": true, 00:09:47.958 "seek_hole": false, 00:09:47.958 "seek_data": false, 00:09:47.958 "copy": true, 00:09:47.958 "nvme_iov_md": false 00:09:47.958 }, 00:09:47.958 "memory_domains": [ 00:09:47.958 { 00:09:47.958 "dma_device_id": "system", 00:09:47.958 "dma_device_type": 1 00:09:47.958 }, 00:09:47.958 { 00:09:47.958 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.958 "dma_device_type": 2 00:09:47.958 } 00:09:47.958 ], 00:09:47.958 "driver_specific": {} 00:09:47.958 } 00:09:47.958 ] 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:47.958 "name": "Existed_Raid", 00:09:47.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.958 "strip_size_kb": 64, 00:09:47.958 "state": "configuring", 00:09:47.958 "raid_level": "raid0", 00:09:47.958 "superblock": false, 00:09:47.958 "num_base_bdevs": 4, 00:09:47.958 "num_base_bdevs_discovered": 2, 00:09:47.958 "num_base_bdevs_operational": 4, 00:09:47.958 "base_bdevs_list": [ 00:09:47.958 { 00:09:47.958 "name": "BaseBdev1", 00:09:47.958 "uuid": "188510e4-3539-4978-9768-a4ffd3590e4c", 00:09:47.958 "is_configured": true, 00:09:47.958 "data_offset": 0, 00:09:47.958 "data_size": 65536 00:09:47.958 }, 00:09:47.958 { 00:09:47.958 "name": "BaseBdev2", 00:09:47.958 "uuid": "cfde43f8-eeb7-4e0f-8fd5-bafeb656f6d6", 00:09:47.958 
"is_configured": true, 00:09:47.958 "data_offset": 0, 00:09:47.958 "data_size": 65536 00:09:47.958 }, 00:09:47.958 { 00:09:47.958 "name": "BaseBdev3", 00:09:47.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.958 "is_configured": false, 00:09:47.958 "data_offset": 0, 00:09:47.958 "data_size": 0 00:09:47.958 }, 00:09:47.958 { 00:09:47.958 "name": "BaseBdev4", 00:09:47.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:47.958 "is_configured": false, 00:09:47.958 "data_offset": 0, 00:09:47.958 "data_size": 0 00:09:47.958 } 00:09:47.958 ] 00:09:47.958 }' 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:47.958 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.528 [2024-11-16 18:50:31.873531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.528 BaseBdev3 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.528 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.528 [ 00:09:48.528 { 00:09:48.528 "name": "BaseBdev3", 00:09:48.528 "aliases": [ 00:09:48.528 "3f664965-b0d6-49d6-aad0-1ce29e7da768" 00:09:48.528 ], 00:09:48.528 "product_name": "Malloc disk", 00:09:48.528 "block_size": 512, 00:09:48.528 "num_blocks": 65536, 00:09:48.528 "uuid": "3f664965-b0d6-49d6-aad0-1ce29e7da768", 00:09:48.528 "assigned_rate_limits": { 00:09:48.528 "rw_ios_per_sec": 0, 00:09:48.528 "rw_mbytes_per_sec": 0, 00:09:48.528 "r_mbytes_per_sec": 0, 00:09:48.528 "w_mbytes_per_sec": 0 00:09:48.528 }, 00:09:48.528 "claimed": true, 00:09:48.528 "claim_type": "exclusive_write", 00:09:48.528 "zoned": false, 00:09:48.528 "supported_io_types": { 00:09:48.528 "read": true, 00:09:48.528 "write": true, 00:09:48.528 "unmap": true, 00:09:48.528 "flush": true, 00:09:48.528 "reset": true, 00:09:48.528 "nvme_admin": false, 00:09:48.528 "nvme_io": false, 00:09:48.528 "nvme_io_md": false, 00:09:48.528 "write_zeroes": true, 00:09:48.528 "zcopy": true, 00:09:48.528 "get_zone_info": false, 00:09:48.528 "zone_management": false, 00:09:48.528 "zone_append": false, 00:09:48.528 "compare": false, 00:09:48.528 "compare_and_write": false, 
00:09:48.528 "abort": true, 00:09:48.528 "seek_hole": false, 00:09:48.528 "seek_data": false, 00:09:48.528 "copy": true, 00:09:48.529 "nvme_iov_md": false 00:09:48.529 }, 00:09:48.529 "memory_domains": [ 00:09:48.529 { 00:09:48.529 "dma_device_id": "system", 00:09:48.529 "dma_device_type": 1 00:09:48.529 }, 00:09:48.529 { 00:09:48.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.529 "dma_device_type": 2 00:09:48.529 } 00:09:48.529 ], 00:09:48.529 "driver_specific": {} 00:09:48.529 } 00:09:48.529 ] 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.529 "name": "Existed_Raid", 00:09:48.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.529 "strip_size_kb": 64, 00:09:48.529 "state": "configuring", 00:09:48.529 "raid_level": "raid0", 00:09:48.529 "superblock": false, 00:09:48.529 "num_base_bdevs": 4, 00:09:48.529 "num_base_bdevs_discovered": 3, 00:09:48.529 "num_base_bdevs_operational": 4, 00:09:48.529 "base_bdevs_list": [ 00:09:48.529 { 00:09:48.529 "name": "BaseBdev1", 00:09:48.529 "uuid": "188510e4-3539-4978-9768-a4ffd3590e4c", 00:09:48.529 "is_configured": true, 00:09:48.529 "data_offset": 0, 00:09:48.529 "data_size": 65536 00:09:48.529 }, 00:09:48.529 { 00:09:48.529 "name": "BaseBdev2", 00:09:48.529 "uuid": "cfde43f8-eeb7-4e0f-8fd5-bafeb656f6d6", 00:09:48.529 "is_configured": true, 00:09:48.529 "data_offset": 0, 00:09:48.529 "data_size": 65536 00:09:48.529 }, 00:09:48.529 { 00:09:48.529 "name": "BaseBdev3", 00:09:48.529 "uuid": "3f664965-b0d6-49d6-aad0-1ce29e7da768", 00:09:48.529 "is_configured": true, 00:09:48.529 "data_offset": 0, 00:09:48.529 "data_size": 65536 00:09:48.529 }, 00:09:48.529 { 00:09:48.529 "name": "BaseBdev4", 00:09:48.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.529 "is_configured": false, 
00:09:48.529 "data_offset": 0, 00:09:48.529 "data_size": 0 00:09:48.529 } 00:09:48.529 ] 00:09:48.529 }' 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.529 18:50:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.098 [2024-11-16 18:50:32.413583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:49.098 [2024-11-16 18:50:32.413630] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.098 [2024-11-16 18:50:32.413639] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:49.098 [2024-11-16 18:50:32.414055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:49.098 [2024-11-16 18:50:32.414241] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.098 [2024-11-16 18:50:32.414255] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:49.098 [2024-11-16 18:50:32.414546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.098 BaseBdev4 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:49.098 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.099 [ 00:09:49.099 { 00:09:49.099 "name": "BaseBdev4", 00:09:49.099 "aliases": [ 00:09:49.099 "56140147-3763-4b46-a29b-2d4c430e4eb3" 00:09:49.099 ], 00:09:49.099 "product_name": "Malloc disk", 00:09:49.099 "block_size": 512, 00:09:49.099 "num_blocks": 65536, 00:09:49.099 "uuid": "56140147-3763-4b46-a29b-2d4c430e4eb3", 00:09:49.099 "assigned_rate_limits": { 00:09:49.099 "rw_ios_per_sec": 0, 00:09:49.099 "rw_mbytes_per_sec": 0, 00:09:49.099 "r_mbytes_per_sec": 0, 00:09:49.099 "w_mbytes_per_sec": 0 00:09:49.099 }, 00:09:49.099 "claimed": true, 00:09:49.099 "claim_type": "exclusive_write", 00:09:49.099 "zoned": false, 00:09:49.099 "supported_io_types": { 00:09:49.099 "read": true, 00:09:49.099 "write": true, 00:09:49.099 "unmap": true, 00:09:49.099 "flush": true, 00:09:49.099 "reset": true, 00:09:49.099 
"nvme_admin": false, 00:09:49.099 "nvme_io": false, 00:09:49.099 "nvme_io_md": false, 00:09:49.099 "write_zeroes": true, 00:09:49.099 "zcopy": true, 00:09:49.099 "get_zone_info": false, 00:09:49.099 "zone_management": false, 00:09:49.099 "zone_append": false, 00:09:49.099 "compare": false, 00:09:49.099 "compare_and_write": false, 00:09:49.099 "abort": true, 00:09:49.099 "seek_hole": false, 00:09:49.099 "seek_data": false, 00:09:49.099 "copy": true, 00:09:49.099 "nvme_iov_md": false 00:09:49.099 }, 00:09:49.099 "memory_domains": [ 00:09:49.099 { 00:09:49.099 "dma_device_id": "system", 00:09:49.099 "dma_device_type": 1 00:09:49.099 }, 00:09:49.099 { 00:09:49.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.099 "dma_device_type": 2 00:09:49.099 } 00:09:49.099 ], 00:09:49.099 "driver_specific": {} 00:09:49.099 } 00:09:49.099 ] 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:49.099 18:50:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.099 "name": "Existed_Raid", 00:09:49.099 "uuid": "04aafe02-43b5-4635-a63c-caf6b0aa6d9c", 00:09:49.099 "strip_size_kb": 64, 00:09:49.099 "state": "online", 00:09:49.099 "raid_level": "raid0", 00:09:49.099 "superblock": false, 00:09:49.099 "num_base_bdevs": 4, 00:09:49.099 "num_base_bdevs_discovered": 4, 00:09:49.099 "num_base_bdevs_operational": 4, 00:09:49.099 "base_bdevs_list": [ 00:09:49.099 { 00:09:49.099 "name": "BaseBdev1", 00:09:49.099 "uuid": "188510e4-3539-4978-9768-a4ffd3590e4c", 00:09:49.099 "is_configured": true, 00:09:49.099 "data_offset": 0, 00:09:49.099 "data_size": 65536 00:09:49.099 }, 00:09:49.099 { 00:09:49.099 "name": "BaseBdev2", 00:09:49.099 "uuid": "cfde43f8-eeb7-4e0f-8fd5-bafeb656f6d6", 00:09:49.099 "is_configured": true, 00:09:49.099 "data_offset": 0, 00:09:49.099 "data_size": 65536 00:09:49.099 }, 00:09:49.099 { 00:09:49.099 "name": "BaseBdev3", 00:09:49.099 "uuid": 
"3f664965-b0d6-49d6-aad0-1ce29e7da768", 00:09:49.099 "is_configured": true, 00:09:49.099 "data_offset": 0, 00:09:49.099 "data_size": 65536 00:09:49.099 }, 00:09:49.099 { 00:09:49.099 "name": "BaseBdev4", 00:09:49.099 "uuid": "56140147-3763-4b46-a29b-2d4c430e4eb3", 00:09:49.099 "is_configured": true, 00:09:49.099 "data_offset": 0, 00:09:49.099 "data_size": 65536 00:09:49.099 } 00:09:49.099 ] 00:09:49.099 }' 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.099 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.670 [2024-11-16 18:50:32.869189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.670 18:50:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:49.670 "name": "Existed_Raid", 00:09:49.670 "aliases": [ 00:09:49.670 "04aafe02-43b5-4635-a63c-caf6b0aa6d9c" 00:09:49.670 ], 00:09:49.670 "product_name": "Raid Volume", 00:09:49.670 "block_size": 512, 00:09:49.670 "num_blocks": 262144, 00:09:49.670 "uuid": "04aafe02-43b5-4635-a63c-caf6b0aa6d9c", 00:09:49.670 "assigned_rate_limits": { 00:09:49.670 "rw_ios_per_sec": 0, 00:09:49.670 "rw_mbytes_per_sec": 0, 00:09:49.670 "r_mbytes_per_sec": 0, 00:09:49.670 "w_mbytes_per_sec": 0 00:09:49.670 }, 00:09:49.670 "claimed": false, 00:09:49.670 "zoned": false, 00:09:49.670 "supported_io_types": { 00:09:49.670 "read": true, 00:09:49.670 "write": true, 00:09:49.670 "unmap": true, 00:09:49.670 "flush": true, 00:09:49.670 "reset": true, 00:09:49.670 "nvme_admin": false, 00:09:49.670 "nvme_io": false, 00:09:49.670 "nvme_io_md": false, 00:09:49.670 "write_zeroes": true, 00:09:49.670 "zcopy": false, 00:09:49.670 "get_zone_info": false, 00:09:49.670 "zone_management": false, 00:09:49.670 "zone_append": false, 00:09:49.670 "compare": false, 00:09:49.670 "compare_and_write": false, 00:09:49.670 "abort": false, 00:09:49.670 "seek_hole": false, 00:09:49.670 "seek_data": false, 00:09:49.670 "copy": false, 00:09:49.670 "nvme_iov_md": false 00:09:49.670 }, 00:09:49.670 "memory_domains": [ 00:09:49.670 { 00:09:49.670 "dma_device_id": "system", 00:09:49.670 "dma_device_type": 1 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.670 "dma_device_type": 2 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "dma_device_id": "system", 00:09:49.670 "dma_device_type": 1 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.670 "dma_device_type": 2 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "dma_device_id": "system", 00:09:49.670 "dma_device_type": 1 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:49.670 "dma_device_type": 2 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "dma_device_id": "system", 00:09:49.670 "dma_device_type": 1 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.670 "dma_device_type": 2 00:09:49.670 } 00:09:49.670 ], 00:09:49.670 "driver_specific": { 00:09:49.670 "raid": { 00:09:49.670 "uuid": "04aafe02-43b5-4635-a63c-caf6b0aa6d9c", 00:09:49.670 "strip_size_kb": 64, 00:09:49.670 "state": "online", 00:09:49.670 "raid_level": "raid0", 00:09:49.670 "superblock": false, 00:09:49.670 "num_base_bdevs": 4, 00:09:49.670 "num_base_bdevs_discovered": 4, 00:09:49.670 "num_base_bdevs_operational": 4, 00:09:49.670 "base_bdevs_list": [ 00:09:49.670 { 00:09:49.670 "name": "BaseBdev1", 00:09:49.670 "uuid": "188510e4-3539-4978-9768-a4ffd3590e4c", 00:09:49.670 "is_configured": true, 00:09:49.670 "data_offset": 0, 00:09:49.670 "data_size": 65536 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "name": "BaseBdev2", 00:09:49.670 "uuid": "cfde43f8-eeb7-4e0f-8fd5-bafeb656f6d6", 00:09:49.670 "is_configured": true, 00:09:49.670 "data_offset": 0, 00:09:49.670 "data_size": 65536 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "name": "BaseBdev3", 00:09:49.670 "uuid": "3f664965-b0d6-49d6-aad0-1ce29e7da768", 00:09:49.670 "is_configured": true, 00:09:49.670 "data_offset": 0, 00:09:49.670 "data_size": 65536 00:09:49.670 }, 00:09:49.670 { 00:09:49.670 "name": "BaseBdev4", 00:09:49.670 "uuid": "56140147-3763-4b46-a29b-2d4c430e4eb3", 00:09:49.670 "is_configured": true, 00:09:49.670 "data_offset": 0, 00:09:49.670 "data_size": 65536 00:09:49.670 } 00:09:49.670 ] 00:09:49.670 } 00:09:49.670 } 00:09:49.670 }' 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:49.670 BaseBdev2 00:09:49.670 BaseBdev3 
00:09:49.670 BaseBdev4' 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.670 18:50:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.670 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.670 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.670 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.670 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.670 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.671 18:50:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.671 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:49.931 18:50:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.931 [2024-11-16 18:50:33.208323] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:49.931 [2024-11-16 18:50:33.208354] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.931 [2024-11-16 18:50:33.208406] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:49.931 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.932 "name": "Existed_Raid", 00:09:49.932 "uuid": "04aafe02-43b5-4635-a63c-caf6b0aa6d9c", 00:09:49.932 "strip_size_kb": 64, 00:09:49.932 "state": "offline", 00:09:49.932 "raid_level": "raid0", 00:09:49.932 "superblock": false, 00:09:49.932 "num_base_bdevs": 4, 00:09:49.932 "num_base_bdevs_discovered": 3, 00:09:49.932 "num_base_bdevs_operational": 3, 00:09:49.932 "base_bdevs_list": [ 00:09:49.932 { 00:09:49.932 "name": null, 00:09:49.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.932 "is_configured": false, 00:09:49.932 "data_offset": 0, 00:09:49.932 "data_size": 65536 00:09:49.932 }, 00:09:49.932 { 00:09:49.932 "name": "BaseBdev2", 00:09:49.932 "uuid": "cfde43f8-eeb7-4e0f-8fd5-bafeb656f6d6", 00:09:49.932 "is_configured": 
true, 00:09:49.932 "data_offset": 0, 00:09:49.932 "data_size": 65536 00:09:49.932 }, 00:09:49.932 { 00:09:49.932 "name": "BaseBdev3", 00:09:49.932 "uuid": "3f664965-b0d6-49d6-aad0-1ce29e7da768", 00:09:49.932 "is_configured": true, 00:09:49.932 "data_offset": 0, 00:09:49.932 "data_size": 65536 00:09:49.932 }, 00:09:49.932 { 00:09:49.932 "name": "BaseBdev4", 00:09:49.932 "uuid": "56140147-3763-4b46-a29b-2d4c430e4eb3", 00:09:49.932 "is_configured": true, 00:09:49.932 "data_offset": 0, 00:09:49.932 "data_size": 65536 00:09:49.932 } 00:09:49.932 ] 00:09:49.932 }' 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.932 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.502 [2024-11-16 18:50:33.731325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.502 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.502 [2024-11-16 18:50:33.884903] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:50.762 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.762 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.762 18:50:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.762 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.762 18:50:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:50.762 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.762 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.762 18:50:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.762 [2024-11-16 18:50:34.033457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:09:50.762 [2024-11-16 18:50:34.033506] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.762 BaseBdev2 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:50.762 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:50.763 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:50.763 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.023 [ 00:09:51.023 { 00:09:51.023 "name": "BaseBdev2", 00:09:51.023 "aliases": [ 00:09:51.023 "65d81560-05c1-44cb-b336-ccaea4254ab9" 00:09:51.023 ], 00:09:51.023 "product_name": "Malloc disk", 00:09:51.023 "block_size": 512, 00:09:51.023 "num_blocks": 65536, 00:09:51.023 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:51.023 "assigned_rate_limits": { 00:09:51.023 "rw_ios_per_sec": 0, 00:09:51.023 "rw_mbytes_per_sec": 0, 00:09:51.023 "r_mbytes_per_sec": 0, 00:09:51.023 "w_mbytes_per_sec": 0 00:09:51.023 }, 00:09:51.023 "claimed": false, 00:09:51.023 "zoned": false, 00:09:51.023 "supported_io_types": { 00:09:51.023 "read": true, 00:09:51.023 "write": true, 00:09:51.023 "unmap": true, 00:09:51.023 "flush": true, 00:09:51.023 "reset": true, 00:09:51.023 "nvme_admin": false, 00:09:51.023 "nvme_io": false, 00:09:51.023 "nvme_io_md": false, 00:09:51.023 "write_zeroes": true, 00:09:51.023 "zcopy": true, 00:09:51.023 "get_zone_info": false, 00:09:51.023 "zone_management": false, 00:09:51.023 "zone_append": false, 00:09:51.023 "compare": false, 00:09:51.023 "compare_and_write": false, 00:09:51.023 "abort": true, 00:09:51.023 "seek_hole": false, 00:09:51.023 
"seek_data": false, 00:09:51.023 "copy": true, 00:09:51.023 "nvme_iov_md": false 00:09:51.023 }, 00:09:51.023 "memory_domains": [ 00:09:51.023 { 00:09:51.023 "dma_device_id": "system", 00:09:51.023 "dma_device_type": 1 00:09:51.023 }, 00:09:51.023 { 00:09:51.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.023 "dma_device_type": 2 00:09:51.023 } 00:09:51.023 ], 00:09:51.023 "driver_specific": {} 00:09:51.023 } 00:09:51.023 ] 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.023 BaseBdev3 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.023 [ 00:09:51.023 { 00:09:51.023 "name": "BaseBdev3", 00:09:51.023 "aliases": [ 00:09:51.023 "ee1032a5-0ccf-44e5-912e-54293c4007e1" 00:09:51.023 ], 00:09:51.023 "product_name": "Malloc disk", 00:09:51.023 "block_size": 512, 00:09:51.023 "num_blocks": 65536, 00:09:51.023 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 00:09:51.023 "assigned_rate_limits": { 00:09:51.023 "rw_ios_per_sec": 0, 00:09:51.023 "rw_mbytes_per_sec": 0, 00:09:51.023 "r_mbytes_per_sec": 0, 00:09:51.023 "w_mbytes_per_sec": 0 00:09:51.023 }, 00:09:51.023 "claimed": false, 00:09:51.023 "zoned": false, 00:09:51.023 "supported_io_types": { 00:09:51.023 "read": true, 00:09:51.023 "write": true, 00:09:51.023 "unmap": true, 00:09:51.023 "flush": true, 00:09:51.023 "reset": true, 00:09:51.023 "nvme_admin": false, 00:09:51.023 "nvme_io": false, 00:09:51.023 "nvme_io_md": false, 00:09:51.023 "write_zeroes": true, 00:09:51.023 "zcopy": true, 00:09:51.023 "get_zone_info": false, 00:09:51.023 "zone_management": false, 00:09:51.023 "zone_append": false, 00:09:51.023 "compare": false, 00:09:51.023 "compare_and_write": false, 00:09:51.023 "abort": true, 00:09:51.023 "seek_hole": false, 00:09:51.023 "seek_data": false, 
00:09:51.023 "copy": true, 00:09:51.023 "nvme_iov_md": false 00:09:51.023 }, 00:09:51.023 "memory_domains": [ 00:09:51.023 { 00:09:51.023 "dma_device_id": "system", 00:09:51.023 "dma_device_type": 1 00:09:51.023 }, 00:09:51.023 { 00:09:51.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.023 "dma_device_type": 2 00:09:51.023 } 00:09:51.023 ], 00:09:51.023 "driver_specific": {} 00:09:51.023 } 00:09:51.023 ] 00:09:51.023 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.024 BaseBdev4 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:51.024 
18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.024 [ 00:09:51.024 { 00:09:51.024 "name": "BaseBdev4", 00:09:51.024 "aliases": [ 00:09:51.024 "de841c9e-0b34-4d25-9144-795227f59399" 00:09:51.024 ], 00:09:51.024 "product_name": "Malloc disk", 00:09:51.024 "block_size": 512, 00:09:51.024 "num_blocks": 65536, 00:09:51.024 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:51.024 "assigned_rate_limits": { 00:09:51.024 "rw_ios_per_sec": 0, 00:09:51.024 "rw_mbytes_per_sec": 0, 00:09:51.024 "r_mbytes_per_sec": 0, 00:09:51.024 "w_mbytes_per_sec": 0 00:09:51.024 }, 00:09:51.024 "claimed": false, 00:09:51.024 "zoned": false, 00:09:51.024 "supported_io_types": { 00:09:51.024 "read": true, 00:09:51.024 "write": true, 00:09:51.024 "unmap": true, 00:09:51.024 "flush": true, 00:09:51.024 "reset": true, 00:09:51.024 "nvme_admin": false, 00:09:51.024 "nvme_io": false, 00:09:51.024 "nvme_io_md": false, 00:09:51.024 "write_zeroes": true, 00:09:51.024 "zcopy": true, 00:09:51.024 "get_zone_info": false, 00:09:51.024 "zone_management": false, 00:09:51.024 "zone_append": false, 00:09:51.024 "compare": false, 00:09:51.024 "compare_and_write": false, 00:09:51.024 "abort": true, 00:09:51.024 "seek_hole": false, 00:09:51.024 "seek_data": false, 00:09:51.024 
"copy": true, 00:09:51.024 "nvme_iov_md": false 00:09:51.024 }, 00:09:51.024 "memory_domains": [ 00:09:51.024 { 00:09:51.024 "dma_device_id": "system", 00:09:51.024 "dma_device_type": 1 00:09:51.024 }, 00:09:51.024 { 00:09:51.024 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.024 "dma_device_type": 2 00:09:51.024 } 00:09:51.024 ], 00:09:51.024 "driver_specific": {} 00:09:51.024 } 00:09:51.024 ] 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.024 [2024-11-16 18:50:34.417788] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:51.024 [2024-11-16 18:50:34.417871] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:51.024 [2024-11-16 18:50:34.417915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:51.024 [2024-11-16 18:50:34.419811] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:51.024 [2024-11-16 18:50:34.419913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.024 18:50:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.024 "name": "Existed_Raid", 00:09:51.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.024 "strip_size_kb": 64, 00:09:51.024 "state": "configuring", 00:09:51.024 
"raid_level": "raid0", 00:09:51.024 "superblock": false, 00:09:51.024 "num_base_bdevs": 4, 00:09:51.024 "num_base_bdevs_discovered": 3, 00:09:51.024 "num_base_bdevs_operational": 4, 00:09:51.024 "base_bdevs_list": [ 00:09:51.024 { 00:09:51.024 "name": "BaseBdev1", 00:09:51.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.024 "is_configured": false, 00:09:51.024 "data_offset": 0, 00:09:51.024 "data_size": 0 00:09:51.024 }, 00:09:51.024 { 00:09:51.024 "name": "BaseBdev2", 00:09:51.024 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:51.024 "is_configured": true, 00:09:51.024 "data_offset": 0, 00:09:51.024 "data_size": 65536 00:09:51.024 }, 00:09:51.024 { 00:09:51.024 "name": "BaseBdev3", 00:09:51.024 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 00:09:51.024 "is_configured": true, 00:09:51.024 "data_offset": 0, 00:09:51.024 "data_size": 65536 00:09:51.024 }, 00:09:51.024 { 00:09:51.024 "name": "BaseBdev4", 00:09:51.024 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:51.024 "is_configured": true, 00:09:51.024 "data_offset": 0, 00:09:51.024 "data_size": 65536 00:09:51.024 } 00:09:51.024 ] 00:09:51.024 }' 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.024 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.610 [2024-11-16 18:50:34.885001] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.610 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.610 "name": "Existed_Raid", 00:09:51.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.610 "strip_size_kb": 64, 00:09:51.610 "state": "configuring", 00:09:51.610 "raid_level": "raid0", 00:09:51.610 "superblock": false, 00:09:51.610 
"num_base_bdevs": 4, 00:09:51.610 "num_base_bdevs_discovered": 2, 00:09:51.610 "num_base_bdevs_operational": 4, 00:09:51.610 "base_bdevs_list": [ 00:09:51.610 { 00:09:51.610 "name": "BaseBdev1", 00:09:51.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.610 "is_configured": false, 00:09:51.610 "data_offset": 0, 00:09:51.610 "data_size": 0 00:09:51.610 }, 00:09:51.610 { 00:09:51.610 "name": null, 00:09:51.610 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:51.610 "is_configured": false, 00:09:51.610 "data_offset": 0, 00:09:51.611 "data_size": 65536 00:09:51.611 }, 00:09:51.611 { 00:09:51.611 "name": "BaseBdev3", 00:09:51.611 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 00:09:51.611 "is_configured": true, 00:09:51.611 "data_offset": 0, 00:09:51.611 "data_size": 65536 00:09:51.611 }, 00:09:51.611 { 00:09:51.611 "name": "BaseBdev4", 00:09:51.611 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:51.611 "is_configured": true, 00:09:51.611 "data_offset": 0, 00:09:51.611 "data_size": 65536 00:09:51.611 } 00:09:51.611 ] 00:09:51.611 }' 00:09:51.611 18:50:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.611 18:50:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.870 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.870 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:51.870 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.870 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.870 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.870 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:51.870 18:50:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:51.870 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.870 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.129 [2024-11-16 18:50:35.376267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.129 BaseBdev1 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.129 18:50:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.129 [ 00:09:52.129 { 00:09:52.129 "name": "BaseBdev1", 00:09:52.129 "aliases": [ 00:09:52.129 "ab724edf-59d7-40a5-95a9-e34fd07616eb" 00:09:52.129 ], 00:09:52.129 "product_name": "Malloc disk", 00:09:52.129 "block_size": 512, 00:09:52.129 "num_blocks": 65536, 00:09:52.129 "uuid": "ab724edf-59d7-40a5-95a9-e34fd07616eb", 00:09:52.129 "assigned_rate_limits": { 00:09:52.129 "rw_ios_per_sec": 0, 00:09:52.129 "rw_mbytes_per_sec": 0, 00:09:52.129 "r_mbytes_per_sec": 0, 00:09:52.129 "w_mbytes_per_sec": 0 00:09:52.129 }, 00:09:52.129 "claimed": true, 00:09:52.129 "claim_type": "exclusive_write", 00:09:52.129 "zoned": false, 00:09:52.129 "supported_io_types": { 00:09:52.129 "read": true, 00:09:52.129 "write": true, 00:09:52.129 "unmap": true, 00:09:52.129 "flush": true, 00:09:52.129 "reset": true, 00:09:52.129 "nvme_admin": false, 00:09:52.129 "nvme_io": false, 00:09:52.129 "nvme_io_md": false, 00:09:52.129 "write_zeroes": true, 00:09:52.129 "zcopy": true, 00:09:52.129 "get_zone_info": false, 00:09:52.129 "zone_management": false, 00:09:52.129 "zone_append": false, 00:09:52.129 "compare": false, 00:09:52.129 "compare_and_write": false, 00:09:52.129 "abort": true, 00:09:52.129 "seek_hole": false, 00:09:52.130 "seek_data": false, 00:09:52.130 "copy": true, 00:09:52.130 "nvme_iov_md": false 00:09:52.130 }, 00:09:52.130 "memory_domains": [ 00:09:52.130 { 00:09:52.130 "dma_device_id": "system", 00:09:52.130 "dma_device_type": 1 00:09:52.130 }, 00:09:52.130 { 00:09:52.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.130 "dma_device_type": 2 00:09:52.130 } 00:09:52.130 ], 00:09:52.130 "driver_specific": {} 00:09:52.130 } 00:09:52.130 ] 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.130 "name": "Existed_Raid", 00:09:52.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.130 "strip_size_kb": 64, 00:09:52.130 "state": "configuring", 00:09:52.130 "raid_level": "raid0", 00:09:52.130 "superblock": false, 
00:09:52.130 "num_base_bdevs": 4, 00:09:52.130 "num_base_bdevs_discovered": 3, 00:09:52.130 "num_base_bdevs_operational": 4, 00:09:52.130 "base_bdevs_list": [ 00:09:52.130 { 00:09:52.130 "name": "BaseBdev1", 00:09:52.130 "uuid": "ab724edf-59d7-40a5-95a9-e34fd07616eb", 00:09:52.130 "is_configured": true, 00:09:52.130 "data_offset": 0, 00:09:52.130 "data_size": 65536 00:09:52.130 }, 00:09:52.130 { 00:09:52.130 "name": null, 00:09:52.130 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:52.130 "is_configured": false, 00:09:52.130 "data_offset": 0, 00:09:52.130 "data_size": 65536 00:09:52.130 }, 00:09:52.130 { 00:09:52.130 "name": "BaseBdev3", 00:09:52.130 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 00:09:52.130 "is_configured": true, 00:09:52.130 "data_offset": 0, 00:09:52.130 "data_size": 65536 00:09:52.130 }, 00:09:52.130 { 00:09:52.130 "name": "BaseBdev4", 00:09:52.130 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:52.130 "is_configured": true, 00:09:52.130 "data_offset": 0, 00:09:52.130 "data_size": 65536 00:09:52.130 } 00:09:52.130 ] 00:09:52.130 }' 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.130 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.389 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.389 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:52.389 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.389 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:52.650 18:50:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.650 [2024-11-16 18:50:35.887532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.650 18:50:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.650 "name": "Existed_Raid", 00:09:52.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:52.650 "strip_size_kb": 64, 00:09:52.650 "state": "configuring", 00:09:52.650 "raid_level": "raid0", 00:09:52.650 "superblock": false, 00:09:52.650 "num_base_bdevs": 4, 00:09:52.650 "num_base_bdevs_discovered": 2, 00:09:52.650 "num_base_bdevs_operational": 4, 00:09:52.650 "base_bdevs_list": [ 00:09:52.650 { 00:09:52.650 "name": "BaseBdev1", 00:09:52.650 "uuid": "ab724edf-59d7-40a5-95a9-e34fd07616eb", 00:09:52.650 "is_configured": true, 00:09:52.650 "data_offset": 0, 00:09:52.650 "data_size": 65536 00:09:52.650 }, 00:09:52.650 { 00:09:52.650 "name": null, 00:09:52.650 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:52.650 "is_configured": false, 00:09:52.650 "data_offset": 0, 00:09:52.650 "data_size": 65536 00:09:52.650 }, 00:09:52.650 { 00:09:52.650 "name": null, 00:09:52.650 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 00:09:52.650 "is_configured": false, 00:09:52.650 "data_offset": 0, 00:09:52.650 "data_size": 65536 00:09:52.650 }, 00:09:52.650 { 00:09:52.650 "name": "BaseBdev4", 00:09:52.650 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:52.650 "is_configured": true, 00:09:52.650 "data_offset": 0, 00:09:52.650 "data_size": 65536 00:09:52.650 } 00:09:52.650 ] 00:09:52.650 }' 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.650 18:50:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.910 18:50:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.910 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.910 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.910 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.910 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.910 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:52.910 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:52.910 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.910 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.170 [2024-11-16 18:50:36.386670] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.170 "name": "Existed_Raid", 00:09:53.170 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.170 "strip_size_kb": 64, 00:09:53.170 "state": "configuring", 00:09:53.170 "raid_level": "raid0", 00:09:53.170 "superblock": false, 00:09:53.170 "num_base_bdevs": 4, 00:09:53.170 "num_base_bdevs_discovered": 3, 00:09:53.170 "num_base_bdevs_operational": 4, 00:09:53.170 "base_bdevs_list": [ 00:09:53.170 { 00:09:53.170 "name": "BaseBdev1", 00:09:53.170 "uuid": "ab724edf-59d7-40a5-95a9-e34fd07616eb", 00:09:53.170 "is_configured": true, 00:09:53.170 "data_offset": 0, 00:09:53.170 "data_size": 65536 00:09:53.170 }, 00:09:53.170 { 00:09:53.170 "name": null, 00:09:53.170 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:53.170 "is_configured": false, 00:09:53.170 "data_offset": 0, 00:09:53.170 "data_size": 65536 00:09:53.170 }, 00:09:53.170 { 00:09:53.170 "name": "BaseBdev3", 00:09:53.170 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 
00:09:53.170 "is_configured": true, 00:09:53.170 "data_offset": 0, 00:09:53.170 "data_size": 65536 00:09:53.170 }, 00:09:53.170 { 00:09:53.170 "name": "BaseBdev4", 00:09:53.170 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:53.170 "is_configured": true, 00:09:53.170 "data_offset": 0, 00:09:53.170 "data_size": 65536 00:09:53.170 } 00:09:53.170 ] 00:09:53.170 }' 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.170 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.430 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.430 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.430 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.430 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.430 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.430 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:53.430 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:53.430 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.430 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.430 [2024-11-16 18:50:36.901824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:53.689 18:50:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:53.689 18:50:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:53.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:53.689 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:53.690 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.690 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:53.690 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.690 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.690 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.690 18:50:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.690 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.690 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.690 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.690 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.690 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.690 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.690 "name": "Existed_Raid", 00:09:53.690 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.690 "strip_size_kb": 64, 00:09:53.690 "state": "configuring", 00:09:53.690 "raid_level": "raid0", 00:09:53.690 "superblock": false, 00:09:53.690 "num_base_bdevs": 4, 00:09:53.690 "num_base_bdevs_discovered": 2, 00:09:53.690 
"num_base_bdevs_operational": 4, 00:09:53.690 "base_bdevs_list": [ 00:09:53.690 { 00:09:53.690 "name": null, 00:09:53.690 "uuid": "ab724edf-59d7-40a5-95a9-e34fd07616eb", 00:09:53.690 "is_configured": false, 00:09:53.690 "data_offset": 0, 00:09:53.690 "data_size": 65536 00:09:53.690 }, 00:09:53.690 { 00:09:53.690 "name": null, 00:09:53.690 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:53.690 "is_configured": false, 00:09:53.690 "data_offset": 0, 00:09:53.690 "data_size": 65536 00:09:53.690 }, 00:09:53.690 { 00:09:53.690 "name": "BaseBdev3", 00:09:53.690 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 00:09:53.690 "is_configured": true, 00:09:53.690 "data_offset": 0, 00:09:53.690 "data_size": 65536 00:09:53.690 }, 00:09:53.690 { 00:09:53.690 "name": "BaseBdev4", 00:09:53.690 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:53.690 "is_configured": true, 00:09:53.690 "data_offset": 0, 00:09:53.690 "data_size": 65536 00:09:53.690 } 00:09:53.690 ] 00:09:53.690 }' 00:09:53.690 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.690 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.259 [2024-11-16 18:50:37.505221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.259 
18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.259 "name": "Existed_Raid", 00:09:54.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.259 "strip_size_kb": 64, 00:09:54.259 "state": "configuring", 00:09:54.259 "raid_level": "raid0", 00:09:54.259 "superblock": false, 00:09:54.259 "num_base_bdevs": 4, 00:09:54.259 "num_base_bdevs_discovered": 3, 00:09:54.259 "num_base_bdevs_operational": 4, 00:09:54.259 "base_bdevs_list": [ 00:09:54.259 { 00:09:54.259 "name": null, 00:09:54.259 "uuid": "ab724edf-59d7-40a5-95a9-e34fd07616eb", 00:09:54.259 "is_configured": false, 00:09:54.259 "data_offset": 0, 00:09:54.259 "data_size": 65536 00:09:54.259 }, 00:09:54.259 { 00:09:54.259 "name": "BaseBdev2", 00:09:54.259 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:54.259 "is_configured": true, 00:09:54.259 "data_offset": 0, 00:09:54.259 "data_size": 65536 00:09:54.259 }, 00:09:54.259 { 00:09:54.259 "name": "BaseBdev3", 00:09:54.259 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 00:09:54.259 "is_configured": true, 00:09:54.259 "data_offset": 0, 00:09:54.259 "data_size": 65536 00:09:54.259 }, 00:09:54.259 { 00:09:54.259 "name": "BaseBdev4", 00:09:54.259 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:54.259 "is_configured": true, 00:09:54.259 "data_offset": 0, 00:09:54.259 "data_size": 65536 00:09:54.259 } 00:09:54.259 ] 00:09:54.259 }' 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.259 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.519 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:54.519 18:50:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.519 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.519 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.519 18:50:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.779 18:50:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ab724edf-59d7-40a5-95a9-e34fd07616eb 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.779 [2024-11-16 18:50:38.087844] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:54.779 [2024-11-16 18:50:38.087954] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:54.779 [2024-11-16 18:50:38.087978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:09:54.779 [2024-11-16 18:50:38.088280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:54.779 
[2024-11-16 18:50:38.088474] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:54.779 [2024-11-16 18:50:38.088518] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:54.779 [2024-11-16 18:50:38.088807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.779 NewBaseBdev 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.779 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:54.779 [ 00:09:54.779 { 00:09:54.779 "name": "NewBaseBdev", 00:09:54.779 "aliases": [ 00:09:54.779 "ab724edf-59d7-40a5-95a9-e34fd07616eb" 00:09:54.779 ], 00:09:54.779 "product_name": "Malloc disk", 00:09:54.780 "block_size": 512, 00:09:54.780 "num_blocks": 65536, 00:09:54.780 "uuid": "ab724edf-59d7-40a5-95a9-e34fd07616eb", 00:09:54.780 "assigned_rate_limits": { 00:09:54.780 "rw_ios_per_sec": 0, 00:09:54.780 "rw_mbytes_per_sec": 0, 00:09:54.780 "r_mbytes_per_sec": 0, 00:09:54.780 "w_mbytes_per_sec": 0 00:09:54.780 }, 00:09:54.780 "claimed": true, 00:09:54.780 "claim_type": "exclusive_write", 00:09:54.780 "zoned": false, 00:09:54.780 "supported_io_types": { 00:09:54.780 "read": true, 00:09:54.780 "write": true, 00:09:54.780 "unmap": true, 00:09:54.780 "flush": true, 00:09:54.780 "reset": true, 00:09:54.780 "nvme_admin": false, 00:09:54.780 "nvme_io": false, 00:09:54.780 "nvme_io_md": false, 00:09:54.780 "write_zeroes": true, 00:09:54.780 "zcopy": true, 00:09:54.780 "get_zone_info": false, 00:09:54.780 "zone_management": false, 00:09:54.780 "zone_append": false, 00:09:54.780 "compare": false, 00:09:54.780 "compare_and_write": false, 00:09:54.780 "abort": true, 00:09:54.780 "seek_hole": false, 00:09:54.780 "seek_data": false, 00:09:54.780 "copy": true, 00:09:54.780 "nvme_iov_md": false 00:09:54.780 }, 00:09:54.780 "memory_domains": [ 00:09:54.780 { 00:09:54.780 "dma_device_id": "system", 00:09:54.780 "dma_device_type": 1 00:09:54.780 }, 00:09:54.780 { 00:09:54.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.780 "dma_device_type": 2 00:09:54.780 } 00:09:54.780 ], 00:09:54.780 "driver_specific": {} 00:09:54.780 } 00:09:54.780 ] 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.780 "name": "Existed_Raid", 00:09:54.780 "uuid": "20c9e757-cfe6-4533-8f28-f7a8e3620865", 00:09:54.780 "strip_size_kb": 64, 00:09:54.780 "state": "online", 00:09:54.780 "raid_level": "raid0", 00:09:54.780 "superblock": false, 00:09:54.780 "num_base_bdevs": 4, 00:09:54.780 
"num_base_bdevs_discovered": 4, 00:09:54.780 "num_base_bdevs_operational": 4, 00:09:54.780 "base_bdevs_list": [ 00:09:54.780 { 00:09:54.780 "name": "NewBaseBdev", 00:09:54.780 "uuid": "ab724edf-59d7-40a5-95a9-e34fd07616eb", 00:09:54.780 "is_configured": true, 00:09:54.780 "data_offset": 0, 00:09:54.780 "data_size": 65536 00:09:54.780 }, 00:09:54.780 { 00:09:54.780 "name": "BaseBdev2", 00:09:54.780 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:54.780 "is_configured": true, 00:09:54.780 "data_offset": 0, 00:09:54.780 "data_size": 65536 00:09:54.780 }, 00:09:54.780 { 00:09:54.780 "name": "BaseBdev3", 00:09:54.780 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 00:09:54.780 "is_configured": true, 00:09:54.780 "data_offset": 0, 00:09:54.780 "data_size": 65536 00:09:54.780 }, 00:09:54.780 { 00:09:54.780 "name": "BaseBdev4", 00:09:54.780 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:54.780 "is_configured": true, 00:09:54.780 "data_offset": 0, 00:09:54.780 "data_size": 65536 00:09:54.780 } 00:09:54.780 ] 00:09:54.780 }' 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.780 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:55.350 [2024-11-16 18:50:38.607369] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:55.350 "name": "Existed_Raid", 00:09:55.350 "aliases": [ 00:09:55.350 "20c9e757-cfe6-4533-8f28-f7a8e3620865" 00:09:55.350 ], 00:09:55.350 "product_name": "Raid Volume", 00:09:55.350 "block_size": 512, 00:09:55.350 "num_blocks": 262144, 00:09:55.350 "uuid": "20c9e757-cfe6-4533-8f28-f7a8e3620865", 00:09:55.350 "assigned_rate_limits": { 00:09:55.350 "rw_ios_per_sec": 0, 00:09:55.350 "rw_mbytes_per_sec": 0, 00:09:55.350 "r_mbytes_per_sec": 0, 00:09:55.350 "w_mbytes_per_sec": 0 00:09:55.350 }, 00:09:55.350 "claimed": false, 00:09:55.350 "zoned": false, 00:09:55.350 "supported_io_types": { 00:09:55.350 "read": true, 00:09:55.350 "write": true, 00:09:55.350 "unmap": true, 00:09:55.350 "flush": true, 00:09:55.350 "reset": true, 00:09:55.350 "nvme_admin": false, 00:09:55.350 "nvme_io": false, 00:09:55.350 "nvme_io_md": false, 00:09:55.350 "write_zeroes": true, 00:09:55.350 "zcopy": false, 00:09:55.350 "get_zone_info": false, 00:09:55.350 "zone_management": false, 00:09:55.350 "zone_append": false, 00:09:55.350 "compare": false, 00:09:55.350 "compare_and_write": false, 00:09:55.350 "abort": false, 00:09:55.350 "seek_hole": false, 00:09:55.350 "seek_data": false, 00:09:55.350 "copy": false, 00:09:55.350 "nvme_iov_md": false 00:09:55.350 }, 00:09:55.350 "memory_domains": [ 
00:09:55.350 { 00:09:55.350 "dma_device_id": "system", 00:09:55.350 "dma_device_type": 1 00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.350 "dma_device_type": 2 00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "dma_device_id": "system", 00:09:55.350 "dma_device_type": 1 00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.350 "dma_device_type": 2 00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "dma_device_id": "system", 00:09:55.350 "dma_device_type": 1 00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.350 "dma_device_type": 2 00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "dma_device_id": "system", 00:09:55.350 "dma_device_type": 1 00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.350 "dma_device_type": 2 00:09:55.350 } 00:09:55.350 ], 00:09:55.350 "driver_specific": { 00:09:55.350 "raid": { 00:09:55.350 "uuid": "20c9e757-cfe6-4533-8f28-f7a8e3620865", 00:09:55.350 "strip_size_kb": 64, 00:09:55.350 "state": "online", 00:09:55.350 "raid_level": "raid0", 00:09:55.350 "superblock": false, 00:09:55.350 "num_base_bdevs": 4, 00:09:55.350 "num_base_bdevs_discovered": 4, 00:09:55.350 "num_base_bdevs_operational": 4, 00:09:55.350 "base_bdevs_list": [ 00:09:55.350 { 00:09:55.350 "name": "NewBaseBdev", 00:09:55.350 "uuid": "ab724edf-59d7-40a5-95a9-e34fd07616eb", 00:09:55.350 "is_configured": true, 00:09:55.350 "data_offset": 0, 00:09:55.350 "data_size": 65536 00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "name": "BaseBdev2", 00:09:55.350 "uuid": "65d81560-05c1-44cb-b336-ccaea4254ab9", 00:09:55.350 "is_configured": true, 00:09:55.350 "data_offset": 0, 00:09:55.350 "data_size": 65536 00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "name": "BaseBdev3", 00:09:55.350 "uuid": "ee1032a5-0ccf-44e5-912e-54293c4007e1", 00:09:55.350 "is_configured": true, 00:09:55.350 "data_offset": 0, 00:09:55.350 "data_size": 65536 
00:09:55.350 }, 00:09:55.350 { 00:09:55.350 "name": "BaseBdev4", 00:09:55.350 "uuid": "de841c9e-0b34-4d25-9144-795227f59399", 00:09:55.350 "is_configured": true, 00:09:55.350 "data_offset": 0, 00:09:55.350 "data_size": 65536 00:09:55.350 } 00:09:55.350 ] 00:09:55.350 } 00:09:55.350 } 00:09:55.350 }' 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:55.350 BaseBdev2 00:09:55.350 BaseBdev3 00:09:55.350 BaseBdev4' 00:09:55.350 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.351 
18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.351 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.611 [2024-11-16 18:50:38.894494] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.611 [2024-11-16 18:50:38.894523] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:55.611 [2024-11-16 18:50:38.894594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.611 [2024-11-16 18:50:38.894680] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.611 [2024-11-16 18:50:38.894690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69174 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69174 ']' 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69174 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69174 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69174' 00:09:55.611 killing process with pid 69174 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69174 00:09:55.611 [2024-11-16 18:50:38.945888] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.611 18:50:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69174 00:09:55.871 [2024-11-16 18:50:39.334436] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:57.253 00:09:57.253 real 0m11.357s 00:09:57.253 user 0m18.091s 00:09:57.253 sys 0m2.038s 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.253 ************************************ 00:09:57.253 END TEST raid_state_function_test 00:09:57.253 ************************************ 00:09:57.253 18:50:40 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:09:57.253 18:50:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:57.253 18:50:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.253 18:50:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:57.253 ************************************ 00:09:57.253 START TEST raid_state_function_test_sb 00:09:57.253 ************************************ 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:57.253 
18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:09:57.253 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:57.254 Process raid pid: 69841 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69841 00:09:57.254 18:50:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69841' 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69841 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69841 ']' 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.254 18:50:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.254 [2024-11-16 18:50:40.591900] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:57.254 [2024-11-16 18:50:40.592116] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:57.515 [2024-11-16 18:50:40.766309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.515 [2024-11-16 18:50:40.878238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.774 [2024-11-16 18:50:41.087533] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.774 [2024-11-16 18:50:41.087630] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.045 [2024-11-16 18:50:41.426974] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.045 [2024-11-16 18:50:41.427028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.045 [2024-11-16 18:50:41.427039] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.045 [2024-11-16 18:50:41.427048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.045 [2024-11-16 18:50:41.427055] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:09:58.045 [2024-11-16 18:50:41.427063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.045 [2024-11-16 18:50:41.427069] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:58.045 [2024-11-16 18:50:41.427077] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.045 18:50:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.045 "name": "Existed_Raid", 00:09:58.045 "uuid": "579915a1-604d-4d1b-ab5d-ab9ea5d10369", 00:09:58.045 "strip_size_kb": 64, 00:09:58.045 "state": "configuring", 00:09:58.045 "raid_level": "raid0", 00:09:58.045 "superblock": true, 00:09:58.045 "num_base_bdevs": 4, 00:09:58.045 "num_base_bdevs_discovered": 0, 00:09:58.045 "num_base_bdevs_operational": 4, 00:09:58.045 "base_bdevs_list": [ 00:09:58.045 { 00:09:58.045 "name": "BaseBdev1", 00:09:58.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.045 "is_configured": false, 00:09:58.045 "data_offset": 0, 00:09:58.045 "data_size": 0 00:09:58.045 }, 00:09:58.045 { 00:09:58.045 "name": "BaseBdev2", 00:09:58.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.045 "is_configured": false, 00:09:58.045 "data_offset": 0, 00:09:58.045 "data_size": 0 00:09:58.045 }, 00:09:58.045 { 00:09:58.045 "name": "BaseBdev3", 00:09:58.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.045 "is_configured": false, 00:09:58.045 "data_offset": 0, 00:09:58.045 "data_size": 0 00:09:58.045 }, 00:09:58.045 { 00:09:58.045 "name": "BaseBdev4", 00:09:58.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.045 "is_configured": false, 00:09:58.045 "data_offset": 0, 00:09:58.045 "data_size": 0 00:09:58.045 } 00:09:58.045 ] 00:09:58.045 }' 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.045 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.666 [2024-11-16 18:50:41.834192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:58.666 [2024-11-16 18:50:41.834228] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.666 [2024-11-16 18:50:41.842192] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.666 [2024-11-16 18:50:41.842235] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.666 [2024-11-16 18:50:41.842244] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:58.666 [2024-11-16 18:50:41.842253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:58.666 [2024-11-16 18:50:41.842258] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:58.666 [2024-11-16 18:50:41.842267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:58.666 [2024-11-16 18:50:41.842273] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:09:58.666 [2024-11-16 18:50:41.842281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.666 [2024-11-16 18:50:41.886532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:58.666 BaseBdev1 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.666 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.666 [ 00:09:58.666 { 00:09:58.666 "name": "BaseBdev1", 00:09:58.666 "aliases": [ 00:09:58.666 "845fc77d-d1cc-4123-9c7a-b6c79393d878" 00:09:58.666 ], 00:09:58.666 "product_name": "Malloc disk", 00:09:58.666 "block_size": 512, 00:09:58.666 "num_blocks": 65536, 00:09:58.666 "uuid": "845fc77d-d1cc-4123-9c7a-b6c79393d878", 00:09:58.666 "assigned_rate_limits": { 00:09:58.666 "rw_ios_per_sec": 0, 00:09:58.666 "rw_mbytes_per_sec": 0, 00:09:58.666 "r_mbytes_per_sec": 0, 00:09:58.666 "w_mbytes_per_sec": 0 00:09:58.666 }, 00:09:58.666 "claimed": true, 00:09:58.666 "claim_type": "exclusive_write", 00:09:58.666 "zoned": false, 00:09:58.666 "supported_io_types": { 00:09:58.666 "read": true, 00:09:58.666 "write": true, 00:09:58.666 "unmap": true, 00:09:58.666 "flush": true, 00:09:58.666 "reset": true, 00:09:58.666 "nvme_admin": false, 00:09:58.666 "nvme_io": false, 00:09:58.666 "nvme_io_md": false, 00:09:58.666 "write_zeroes": true, 00:09:58.666 "zcopy": true, 00:09:58.666 "get_zone_info": false, 00:09:58.666 "zone_management": false, 00:09:58.666 "zone_append": false, 00:09:58.666 "compare": false, 00:09:58.666 "compare_and_write": false, 00:09:58.667 "abort": true, 00:09:58.667 "seek_hole": false, 00:09:58.667 "seek_data": false, 00:09:58.667 "copy": true, 00:09:58.667 "nvme_iov_md": false 00:09:58.667 }, 00:09:58.667 "memory_domains": [ 00:09:58.667 { 00:09:58.667 "dma_device_id": "system", 00:09:58.667 "dma_device_type": 1 00:09:58.667 }, 00:09:58.667 { 00:09:58.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.667 "dma_device_type": 2 00:09:58.667 } 00:09:58.667 ], 00:09:58.667 "driver_specific": {} 
00:09:58.667 } 00:09:58.667 ] 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.667 "name": "Existed_Raid", 00:09:58.667 "uuid": "7e46df9c-5aa2-4863-9f9a-763df6883649", 00:09:58.667 "strip_size_kb": 64, 00:09:58.667 "state": "configuring", 00:09:58.667 "raid_level": "raid0", 00:09:58.667 "superblock": true, 00:09:58.667 "num_base_bdevs": 4, 00:09:58.667 "num_base_bdevs_discovered": 1, 00:09:58.667 "num_base_bdevs_operational": 4, 00:09:58.667 "base_bdevs_list": [ 00:09:58.667 { 00:09:58.667 "name": "BaseBdev1", 00:09:58.667 "uuid": "845fc77d-d1cc-4123-9c7a-b6c79393d878", 00:09:58.667 "is_configured": true, 00:09:58.667 "data_offset": 2048, 00:09:58.667 "data_size": 63488 00:09:58.667 }, 00:09:58.667 { 00:09:58.667 "name": "BaseBdev2", 00:09:58.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.667 "is_configured": false, 00:09:58.667 "data_offset": 0, 00:09:58.667 "data_size": 0 00:09:58.667 }, 00:09:58.667 { 00:09:58.667 "name": "BaseBdev3", 00:09:58.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.667 "is_configured": false, 00:09:58.667 "data_offset": 0, 00:09:58.667 "data_size": 0 00:09:58.667 }, 00:09:58.667 { 00:09:58.667 "name": "BaseBdev4", 00:09:58.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.667 "is_configured": false, 00:09:58.667 "data_offset": 0, 00:09:58.667 "data_size": 0 00:09:58.667 } 00:09:58.667 ] 00:09:58.667 }' 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.667 18:50:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.238 [2024-11-16 18:50:42.409718] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:59.238 [2024-11-16 18:50:42.409851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.238 [2024-11-16 18:50:42.417762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.238 [2024-11-16 18:50:42.419589] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:59.238 [2024-11-16 18:50:42.419680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:59.238 [2024-11-16 18:50:42.419737] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:59.238 [2024-11-16 18:50:42.419764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:59.238 [2024-11-16 18:50:42.419783] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:09:59.238 [2024-11-16 18:50:42.419804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:59.238 18:50:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.238 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.239 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.239 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.239 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.239 "name": 
"Existed_Raid", 00:09:59.239 "uuid": "e4cbb3a8-927b-4883-8f72-dee9d7ed5eec", 00:09:59.239 "strip_size_kb": 64, 00:09:59.239 "state": "configuring", 00:09:59.239 "raid_level": "raid0", 00:09:59.239 "superblock": true, 00:09:59.239 "num_base_bdevs": 4, 00:09:59.239 "num_base_bdevs_discovered": 1, 00:09:59.239 "num_base_bdevs_operational": 4, 00:09:59.239 "base_bdevs_list": [ 00:09:59.239 { 00:09:59.239 "name": "BaseBdev1", 00:09:59.239 "uuid": "845fc77d-d1cc-4123-9c7a-b6c79393d878", 00:09:59.239 "is_configured": true, 00:09:59.239 "data_offset": 2048, 00:09:59.239 "data_size": 63488 00:09:59.239 }, 00:09:59.239 { 00:09:59.239 "name": "BaseBdev2", 00:09:59.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.239 "is_configured": false, 00:09:59.239 "data_offset": 0, 00:09:59.239 "data_size": 0 00:09:59.239 }, 00:09:59.239 { 00:09:59.239 "name": "BaseBdev3", 00:09:59.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.239 "is_configured": false, 00:09:59.239 "data_offset": 0, 00:09:59.239 "data_size": 0 00:09:59.239 }, 00:09:59.239 { 00:09:59.239 "name": "BaseBdev4", 00:09:59.239 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.239 "is_configured": false, 00:09:59.239 "data_offset": 0, 00:09:59.239 "data_size": 0 00:09:59.239 } 00:09:59.239 ] 00:09:59.239 }' 00:09:59.239 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.239 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.499 BaseBdev2 00:09:59.499 [2024-11-16 18:50:42.880309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.499 [ 00:09:59.499 { 00:09:59.499 "name": "BaseBdev2", 00:09:59.499 "aliases": [ 00:09:59.499 "d54de6a6-abda-4370-9f1e-9841d664b082" 00:09:59.499 ], 00:09:59.499 "product_name": "Malloc disk", 00:09:59.499 "block_size": 512, 00:09:59.499 "num_blocks": 65536, 00:09:59.499 "uuid": "d54de6a6-abda-4370-9f1e-9841d664b082", 00:09:59.499 "assigned_rate_limits": { 
00:09:59.499 "rw_ios_per_sec": 0, 00:09:59.499 "rw_mbytes_per_sec": 0, 00:09:59.499 "r_mbytes_per_sec": 0, 00:09:59.499 "w_mbytes_per_sec": 0 00:09:59.499 }, 00:09:59.499 "claimed": true, 00:09:59.499 "claim_type": "exclusive_write", 00:09:59.499 "zoned": false, 00:09:59.499 "supported_io_types": { 00:09:59.499 "read": true, 00:09:59.499 "write": true, 00:09:59.499 "unmap": true, 00:09:59.499 "flush": true, 00:09:59.499 "reset": true, 00:09:59.499 "nvme_admin": false, 00:09:59.499 "nvme_io": false, 00:09:59.499 "nvme_io_md": false, 00:09:59.499 "write_zeroes": true, 00:09:59.499 "zcopy": true, 00:09:59.499 "get_zone_info": false, 00:09:59.499 "zone_management": false, 00:09:59.499 "zone_append": false, 00:09:59.499 "compare": false, 00:09:59.499 "compare_and_write": false, 00:09:59.499 "abort": true, 00:09:59.499 "seek_hole": false, 00:09:59.499 "seek_data": false, 00:09:59.499 "copy": true, 00:09:59.499 "nvme_iov_md": false 00:09:59.499 }, 00:09:59.499 "memory_domains": [ 00:09:59.499 { 00:09:59.499 "dma_device_id": "system", 00:09:59.499 "dma_device_type": 1 00:09:59.499 }, 00:09:59.499 { 00:09:59.499 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.499 "dma_device_type": 2 00:09:59.499 } 00:09:59.499 ], 00:09:59.499 "driver_specific": {} 00:09:59.499 } 00:09:59.499 ] 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.499 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.500 "name": "Existed_Raid", 00:09:59.500 "uuid": "e4cbb3a8-927b-4883-8f72-dee9d7ed5eec", 00:09:59.500 "strip_size_kb": 64, 00:09:59.500 "state": "configuring", 00:09:59.500 "raid_level": "raid0", 00:09:59.500 "superblock": true, 00:09:59.500 "num_base_bdevs": 4, 00:09:59.500 "num_base_bdevs_discovered": 2, 00:09:59.500 "num_base_bdevs_operational": 4, 00:09:59.500 
"base_bdevs_list": [ 00:09:59.500 { 00:09:59.500 "name": "BaseBdev1", 00:09:59.500 "uuid": "845fc77d-d1cc-4123-9c7a-b6c79393d878", 00:09:59.500 "is_configured": true, 00:09:59.500 "data_offset": 2048, 00:09:59.500 "data_size": 63488 00:09:59.500 }, 00:09:59.500 { 00:09:59.500 "name": "BaseBdev2", 00:09:59.500 "uuid": "d54de6a6-abda-4370-9f1e-9841d664b082", 00:09:59.500 "is_configured": true, 00:09:59.500 "data_offset": 2048, 00:09:59.500 "data_size": 63488 00:09:59.500 }, 00:09:59.500 { 00:09:59.500 "name": "BaseBdev3", 00:09:59.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.500 "is_configured": false, 00:09:59.500 "data_offset": 0, 00:09:59.500 "data_size": 0 00:09:59.500 }, 00:09:59.500 { 00:09:59.500 "name": "BaseBdev4", 00:09:59.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.500 "is_configured": false, 00:09:59.500 "data_offset": 0, 00:09:59.500 "data_size": 0 00:09:59.500 } 00:09:59.500 ] 00:09:59.500 }' 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.500 18:50:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.069 [2024-11-16 18:50:43.414707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.069 BaseBdev3 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.069 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.069 [ 00:10:00.069 { 00:10:00.069 "name": "BaseBdev3", 00:10:00.069 "aliases": [ 00:10:00.069 "4ee31a79-def3-4d8d-87a0-a0b6e8c736e2" 00:10:00.069 ], 00:10:00.069 "product_name": "Malloc disk", 00:10:00.069 "block_size": 512, 00:10:00.069 "num_blocks": 65536, 00:10:00.069 "uuid": "4ee31a79-def3-4d8d-87a0-a0b6e8c736e2", 00:10:00.069 "assigned_rate_limits": { 00:10:00.069 "rw_ios_per_sec": 0, 00:10:00.069 "rw_mbytes_per_sec": 0, 00:10:00.069 "r_mbytes_per_sec": 0, 00:10:00.069 "w_mbytes_per_sec": 0 00:10:00.069 }, 00:10:00.069 "claimed": true, 00:10:00.069 "claim_type": "exclusive_write", 00:10:00.069 "zoned": false, 00:10:00.069 "supported_io_types": { 00:10:00.069 "read": true, 00:10:00.069 
"write": true, 00:10:00.069 "unmap": true, 00:10:00.069 "flush": true, 00:10:00.069 "reset": true, 00:10:00.069 "nvme_admin": false, 00:10:00.069 "nvme_io": false, 00:10:00.069 "nvme_io_md": false, 00:10:00.069 "write_zeroes": true, 00:10:00.069 "zcopy": true, 00:10:00.069 "get_zone_info": false, 00:10:00.069 "zone_management": false, 00:10:00.069 "zone_append": false, 00:10:00.069 "compare": false, 00:10:00.069 "compare_and_write": false, 00:10:00.069 "abort": true, 00:10:00.069 "seek_hole": false, 00:10:00.069 "seek_data": false, 00:10:00.069 "copy": true, 00:10:00.069 "nvme_iov_md": false 00:10:00.069 }, 00:10:00.069 "memory_domains": [ 00:10:00.069 { 00:10:00.069 "dma_device_id": "system", 00:10:00.069 "dma_device_type": 1 00:10:00.069 }, 00:10:00.070 { 00:10:00.070 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.070 "dma_device_type": 2 00:10:00.070 } 00:10:00.070 ], 00:10:00.070 "driver_specific": {} 00:10:00.070 } 00:10:00.070 ] 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.070 "name": "Existed_Raid", 00:10:00.070 "uuid": "e4cbb3a8-927b-4883-8f72-dee9d7ed5eec", 00:10:00.070 "strip_size_kb": 64, 00:10:00.070 "state": "configuring", 00:10:00.070 "raid_level": "raid0", 00:10:00.070 "superblock": true, 00:10:00.070 "num_base_bdevs": 4, 00:10:00.070 "num_base_bdevs_discovered": 3, 00:10:00.070 "num_base_bdevs_operational": 4, 00:10:00.070 "base_bdevs_list": [ 00:10:00.070 { 00:10:00.070 "name": "BaseBdev1", 00:10:00.070 "uuid": "845fc77d-d1cc-4123-9c7a-b6c79393d878", 00:10:00.070 "is_configured": true, 00:10:00.070 "data_offset": 2048, 00:10:00.070 "data_size": 63488 00:10:00.070 }, 00:10:00.070 { 00:10:00.070 "name": "BaseBdev2", 00:10:00.070 "uuid": 
"d54de6a6-abda-4370-9f1e-9841d664b082", 00:10:00.070 "is_configured": true, 00:10:00.070 "data_offset": 2048, 00:10:00.070 "data_size": 63488 00:10:00.070 }, 00:10:00.070 { 00:10:00.070 "name": "BaseBdev3", 00:10:00.070 "uuid": "4ee31a79-def3-4d8d-87a0-a0b6e8c736e2", 00:10:00.070 "is_configured": true, 00:10:00.070 "data_offset": 2048, 00:10:00.070 "data_size": 63488 00:10:00.070 }, 00:10:00.070 { 00:10:00.070 "name": "BaseBdev4", 00:10:00.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.070 "is_configured": false, 00:10:00.070 "data_offset": 0, 00:10:00.070 "data_size": 0 00:10:00.070 } 00:10:00.070 ] 00:10:00.070 }' 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.070 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.639 [2024-11-16 18:50:43.845503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:00.639 [2024-11-16 18:50:43.845877] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:00.639 [2024-11-16 18:50:43.845900] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:00.639 [2024-11-16 18:50:43.846166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:00.639 [2024-11-16 18:50:43.846343] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:00.639 [2024-11-16 18:50:43.846357] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:00.639 BaseBdev4 00:10:00.639 [2024-11-16 18:50:43.846495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.639 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.640 [ 00:10:00.640 { 00:10:00.640 "name": "BaseBdev4", 00:10:00.640 "aliases": [ 00:10:00.640 "08d81d2d-b14c-444f-88d3-ff515fdb9280" 00:10:00.640 ], 00:10:00.640 "product_name": "Malloc disk", 00:10:00.640 "block_size": 512, 00:10:00.640 
"num_blocks": 65536, 00:10:00.640 "uuid": "08d81d2d-b14c-444f-88d3-ff515fdb9280", 00:10:00.640 "assigned_rate_limits": { 00:10:00.640 "rw_ios_per_sec": 0, 00:10:00.640 "rw_mbytes_per_sec": 0, 00:10:00.640 "r_mbytes_per_sec": 0, 00:10:00.640 "w_mbytes_per_sec": 0 00:10:00.640 }, 00:10:00.640 "claimed": true, 00:10:00.640 "claim_type": "exclusive_write", 00:10:00.640 "zoned": false, 00:10:00.640 "supported_io_types": { 00:10:00.640 "read": true, 00:10:00.640 "write": true, 00:10:00.640 "unmap": true, 00:10:00.640 "flush": true, 00:10:00.640 "reset": true, 00:10:00.640 "nvme_admin": false, 00:10:00.640 "nvme_io": false, 00:10:00.640 "nvme_io_md": false, 00:10:00.640 "write_zeroes": true, 00:10:00.640 "zcopy": true, 00:10:00.640 "get_zone_info": false, 00:10:00.640 "zone_management": false, 00:10:00.640 "zone_append": false, 00:10:00.640 "compare": false, 00:10:00.640 "compare_and_write": false, 00:10:00.640 "abort": true, 00:10:00.640 "seek_hole": false, 00:10:00.640 "seek_data": false, 00:10:00.640 "copy": true, 00:10:00.640 "nvme_iov_md": false 00:10:00.640 }, 00:10:00.640 "memory_domains": [ 00:10:00.640 { 00:10:00.640 "dma_device_id": "system", 00:10:00.640 "dma_device_type": 1 00:10:00.640 }, 00:10:00.640 { 00:10:00.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.640 "dma_device_type": 2 00:10:00.640 } 00:10:00.640 ], 00:10:00.640 "driver_specific": {} 00:10:00.640 } 00:10:00.640 ] 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.640 "name": "Existed_Raid", 00:10:00.640 "uuid": "e4cbb3a8-927b-4883-8f72-dee9d7ed5eec", 00:10:00.640 "strip_size_kb": 64, 00:10:00.640 "state": "online", 00:10:00.640 "raid_level": "raid0", 00:10:00.640 "superblock": true, 00:10:00.640 "num_base_bdevs": 4, 
00:10:00.640 "num_base_bdevs_discovered": 4, 00:10:00.640 "num_base_bdevs_operational": 4, 00:10:00.640 "base_bdevs_list": [ 00:10:00.640 { 00:10:00.640 "name": "BaseBdev1", 00:10:00.640 "uuid": "845fc77d-d1cc-4123-9c7a-b6c79393d878", 00:10:00.640 "is_configured": true, 00:10:00.640 "data_offset": 2048, 00:10:00.640 "data_size": 63488 00:10:00.640 }, 00:10:00.640 { 00:10:00.640 "name": "BaseBdev2", 00:10:00.640 "uuid": "d54de6a6-abda-4370-9f1e-9841d664b082", 00:10:00.640 "is_configured": true, 00:10:00.640 "data_offset": 2048, 00:10:00.640 "data_size": 63488 00:10:00.640 }, 00:10:00.640 { 00:10:00.640 "name": "BaseBdev3", 00:10:00.640 "uuid": "4ee31a79-def3-4d8d-87a0-a0b6e8c736e2", 00:10:00.640 "is_configured": true, 00:10:00.640 "data_offset": 2048, 00:10:00.640 "data_size": 63488 00:10:00.640 }, 00:10:00.640 { 00:10:00.640 "name": "BaseBdev4", 00:10:00.640 "uuid": "08d81d2d-b14c-444f-88d3-ff515fdb9280", 00:10:00.640 "is_configured": true, 00:10:00.640 "data_offset": 2048, 00:10:00.640 "data_size": 63488 00:10:00.640 } 00:10:00.640 ] 00:10:00.640 }' 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.640 18:50:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.900 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:00.900 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:00.900 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:00.900 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:00.900 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:00.900 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:00.900 
18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:00.900 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:00.900 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.900 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.900 [2024-11-16 18:50:44.357050] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:01.159 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.159 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:01.159 "name": "Existed_Raid", 00:10:01.159 "aliases": [ 00:10:01.159 "e4cbb3a8-927b-4883-8f72-dee9d7ed5eec" 00:10:01.159 ], 00:10:01.159 "product_name": "Raid Volume", 00:10:01.159 "block_size": 512, 00:10:01.159 "num_blocks": 253952, 00:10:01.159 "uuid": "e4cbb3a8-927b-4883-8f72-dee9d7ed5eec", 00:10:01.159 "assigned_rate_limits": { 00:10:01.159 "rw_ios_per_sec": 0, 00:10:01.159 "rw_mbytes_per_sec": 0, 00:10:01.159 "r_mbytes_per_sec": 0, 00:10:01.159 "w_mbytes_per_sec": 0 00:10:01.159 }, 00:10:01.159 "claimed": false, 00:10:01.159 "zoned": false, 00:10:01.159 "supported_io_types": { 00:10:01.159 "read": true, 00:10:01.159 "write": true, 00:10:01.159 "unmap": true, 00:10:01.159 "flush": true, 00:10:01.159 "reset": true, 00:10:01.159 "nvme_admin": false, 00:10:01.159 "nvme_io": false, 00:10:01.159 "nvme_io_md": false, 00:10:01.159 "write_zeroes": true, 00:10:01.159 "zcopy": false, 00:10:01.159 "get_zone_info": false, 00:10:01.159 "zone_management": false, 00:10:01.159 "zone_append": false, 00:10:01.159 "compare": false, 00:10:01.159 "compare_and_write": false, 00:10:01.159 "abort": false, 00:10:01.159 "seek_hole": false, 00:10:01.159 "seek_data": false, 00:10:01.159 "copy": false, 00:10:01.159 
"nvme_iov_md": false 00:10:01.159 }, 00:10:01.159 "memory_domains": [ 00:10:01.159 { 00:10:01.159 "dma_device_id": "system", 00:10:01.159 "dma_device_type": 1 00:10:01.159 }, 00:10:01.159 { 00:10:01.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.159 "dma_device_type": 2 00:10:01.159 }, 00:10:01.159 { 00:10:01.160 "dma_device_id": "system", 00:10:01.160 "dma_device_type": 1 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.160 "dma_device_type": 2 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "dma_device_id": "system", 00:10:01.160 "dma_device_type": 1 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.160 "dma_device_type": 2 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "dma_device_id": "system", 00:10:01.160 "dma_device_type": 1 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.160 "dma_device_type": 2 00:10:01.160 } 00:10:01.160 ], 00:10:01.160 "driver_specific": { 00:10:01.160 "raid": { 00:10:01.160 "uuid": "e4cbb3a8-927b-4883-8f72-dee9d7ed5eec", 00:10:01.160 "strip_size_kb": 64, 00:10:01.160 "state": "online", 00:10:01.160 "raid_level": "raid0", 00:10:01.160 "superblock": true, 00:10:01.160 "num_base_bdevs": 4, 00:10:01.160 "num_base_bdevs_discovered": 4, 00:10:01.160 "num_base_bdevs_operational": 4, 00:10:01.160 "base_bdevs_list": [ 00:10:01.160 { 00:10:01.160 "name": "BaseBdev1", 00:10:01.160 "uuid": "845fc77d-d1cc-4123-9c7a-b6c79393d878", 00:10:01.160 "is_configured": true, 00:10:01.160 "data_offset": 2048, 00:10:01.160 "data_size": 63488 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "name": "BaseBdev2", 00:10:01.160 "uuid": "d54de6a6-abda-4370-9f1e-9841d664b082", 00:10:01.160 "is_configured": true, 00:10:01.160 "data_offset": 2048, 00:10:01.160 "data_size": 63488 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "name": "BaseBdev3", 00:10:01.160 "uuid": "4ee31a79-def3-4d8d-87a0-a0b6e8c736e2", 00:10:01.160 "is_configured": true, 
00:10:01.160 "data_offset": 2048, 00:10:01.160 "data_size": 63488 00:10:01.160 }, 00:10:01.160 { 00:10:01.160 "name": "BaseBdev4", 00:10:01.160 "uuid": "08d81d2d-b14c-444f-88d3-ff515fdb9280", 00:10:01.160 "is_configured": true, 00:10:01.160 "data_offset": 2048, 00:10:01.160 "data_size": 63488 00:10:01.160 } 00:10:01.160 ] 00:10:01.160 } 00:10:01.160 } 00:10:01.160 }' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:01.160 BaseBdev2 00:10:01.160 BaseBdev3 00:10:01.160 BaseBdev4' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.160 18:50:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:01.160 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.420 [2024-11-16 18:50:44.668223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.420 [2024-11-16 18:50:44.668297] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:01.420 [2024-11-16 18:50:44.668369] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.420 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.421 "name": "Existed_Raid", 00:10:01.421 "uuid": "e4cbb3a8-927b-4883-8f72-dee9d7ed5eec", 00:10:01.421 "strip_size_kb": 64, 00:10:01.421 "state": "offline", 00:10:01.421 "raid_level": "raid0", 00:10:01.421 "superblock": true, 00:10:01.421 "num_base_bdevs": 4, 00:10:01.421 "num_base_bdevs_discovered": 3, 00:10:01.421 "num_base_bdevs_operational": 3, 00:10:01.421 "base_bdevs_list": [ 00:10:01.421 { 00:10:01.421 "name": null, 00:10:01.421 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.421 "is_configured": false, 00:10:01.421 "data_offset": 0, 00:10:01.421 "data_size": 63488 00:10:01.421 }, 00:10:01.421 { 00:10:01.421 "name": "BaseBdev2", 00:10:01.421 "uuid": "d54de6a6-abda-4370-9f1e-9841d664b082", 00:10:01.421 "is_configured": true, 00:10:01.421 "data_offset": 2048, 00:10:01.421 "data_size": 63488 00:10:01.421 }, 00:10:01.421 { 00:10:01.421 "name": "BaseBdev3", 00:10:01.421 "uuid": "4ee31a79-def3-4d8d-87a0-a0b6e8c736e2", 00:10:01.421 "is_configured": true, 00:10:01.421 "data_offset": 2048, 00:10:01.421 "data_size": 63488 00:10:01.421 }, 00:10:01.421 { 00:10:01.421 "name": "BaseBdev4", 00:10:01.421 "uuid": "08d81d2d-b14c-444f-88d3-ff515fdb9280", 00:10:01.421 "is_configured": true, 00:10:01.421 "data_offset": 2048, 00:10:01.421 "data_size": 63488 00:10:01.421 } 00:10:01.421 ] 00:10:01.421 }' 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.421 18:50:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.992 
18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.992 [2024-11-16 18:50:45.272786] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.992 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.992 [2024-11-16 18:50:45.423390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:02.252 18:50:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.252 [2024-11-16 18:50:45.577813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:02.252 [2024-11-16 18:50:45.577860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:02.252 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.513 BaseBdev2 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.513 [ 00:10:02.513 { 00:10:02.513 "name": "BaseBdev2", 00:10:02.513 "aliases": [ 00:10:02.513 
"5177af22-abee-4f69-bb0b-dfad1bb9a2e6" 00:10:02.513 ], 00:10:02.513 "product_name": "Malloc disk", 00:10:02.513 "block_size": 512, 00:10:02.513 "num_blocks": 65536, 00:10:02.513 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:02.513 "assigned_rate_limits": { 00:10:02.513 "rw_ios_per_sec": 0, 00:10:02.513 "rw_mbytes_per_sec": 0, 00:10:02.513 "r_mbytes_per_sec": 0, 00:10:02.513 "w_mbytes_per_sec": 0 00:10:02.513 }, 00:10:02.513 "claimed": false, 00:10:02.513 "zoned": false, 00:10:02.513 "supported_io_types": { 00:10:02.513 "read": true, 00:10:02.513 "write": true, 00:10:02.513 "unmap": true, 00:10:02.513 "flush": true, 00:10:02.513 "reset": true, 00:10:02.513 "nvme_admin": false, 00:10:02.513 "nvme_io": false, 00:10:02.513 "nvme_io_md": false, 00:10:02.513 "write_zeroes": true, 00:10:02.513 "zcopy": true, 00:10:02.513 "get_zone_info": false, 00:10:02.513 "zone_management": false, 00:10:02.513 "zone_append": false, 00:10:02.513 "compare": false, 00:10:02.513 "compare_and_write": false, 00:10:02.513 "abort": true, 00:10:02.513 "seek_hole": false, 00:10:02.513 "seek_data": false, 00:10:02.513 "copy": true, 00:10:02.513 "nvme_iov_md": false 00:10:02.513 }, 00:10:02.513 "memory_domains": [ 00:10:02.513 { 00:10:02.513 "dma_device_id": "system", 00:10:02.513 "dma_device_type": 1 00:10:02.513 }, 00:10:02.513 { 00:10:02.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.513 "dma_device_type": 2 00:10:02.513 } 00:10:02.513 ], 00:10:02.513 "driver_specific": {} 00:10:02.513 } 00:10:02.513 ] 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.513 18:50:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.513 BaseBdev3 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.513 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.513 [ 00:10:02.513 { 
00:10:02.513 "name": "BaseBdev3", 00:10:02.513 "aliases": [ 00:10:02.513 "1330763d-77d9-4b5e-b028-aec911f170b6" 00:10:02.513 ], 00:10:02.513 "product_name": "Malloc disk", 00:10:02.513 "block_size": 512, 00:10:02.513 "num_blocks": 65536, 00:10:02.513 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:02.513 "assigned_rate_limits": { 00:10:02.513 "rw_ios_per_sec": 0, 00:10:02.513 "rw_mbytes_per_sec": 0, 00:10:02.513 "r_mbytes_per_sec": 0, 00:10:02.513 "w_mbytes_per_sec": 0 00:10:02.513 }, 00:10:02.514 "claimed": false, 00:10:02.514 "zoned": false, 00:10:02.514 "supported_io_types": { 00:10:02.514 "read": true, 00:10:02.514 "write": true, 00:10:02.514 "unmap": true, 00:10:02.514 "flush": true, 00:10:02.514 "reset": true, 00:10:02.514 "nvme_admin": false, 00:10:02.514 "nvme_io": false, 00:10:02.514 "nvme_io_md": false, 00:10:02.514 "write_zeroes": true, 00:10:02.514 "zcopy": true, 00:10:02.514 "get_zone_info": false, 00:10:02.514 "zone_management": false, 00:10:02.514 "zone_append": false, 00:10:02.514 "compare": false, 00:10:02.514 "compare_and_write": false, 00:10:02.514 "abort": true, 00:10:02.514 "seek_hole": false, 00:10:02.514 "seek_data": false, 00:10:02.514 "copy": true, 00:10:02.514 "nvme_iov_md": false 00:10:02.514 }, 00:10:02.514 "memory_domains": [ 00:10:02.514 { 00:10:02.514 "dma_device_id": "system", 00:10:02.514 "dma_device_type": 1 00:10:02.514 }, 00:10:02.514 { 00:10:02.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.514 "dma_device_type": 2 00:10:02.514 } 00:10:02.514 ], 00:10:02.514 "driver_specific": {} 00:10:02.514 } 00:10:02.514 ] 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.514 BaseBdev4 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:02.514 [ 00:10:02.514 { 00:10:02.514 "name": "BaseBdev4", 00:10:02.514 "aliases": [ 00:10:02.514 "d9f9f695-64b9-4a00-91c6-ceff5feebe4c" 00:10:02.514 ], 00:10:02.514 "product_name": "Malloc disk", 00:10:02.514 "block_size": 512, 00:10:02.514 "num_blocks": 65536, 00:10:02.514 "uuid": "d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:02.514 "assigned_rate_limits": { 00:10:02.514 "rw_ios_per_sec": 0, 00:10:02.514 "rw_mbytes_per_sec": 0, 00:10:02.514 "r_mbytes_per_sec": 0, 00:10:02.514 "w_mbytes_per_sec": 0 00:10:02.514 }, 00:10:02.514 "claimed": false, 00:10:02.514 "zoned": false, 00:10:02.514 "supported_io_types": { 00:10:02.514 "read": true, 00:10:02.514 "write": true, 00:10:02.514 "unmap": true, 00:10:02.514 "flush": true, 00:10:02.514 "reset": true, 00:10:02.514 "nvme_admin": false, 00:10:02.514 "nvme_io": false, 00:10:02.514 "nvme_io_md": false, 00:10:02.514 "write_zeroes": true, 00:10:02.514 "zcopy": true, 00:10:02.514 "get_zone_info": false, 00:10:02.514 "zone_management": false, 00:10:02.514 "zone_append": false, 00:10:02.514 "compare": false, 00:10:02.514 "compare_and_write": false, 00:10:02.514 "abort": true, 00:10:02.514 "seek_hole": false, 00:10:02.514 "seek_data": false, 00:10:02.514 "copy": true, 00:10:02.514 "nvme_iov_md": false 00:10:02.514 }, 00:10:02.514 "memory_domains": [ 00:10:02.514 { 00:10:02.514 "dma_device_id": "system", 00:10:02.514 "dma_device_type": 1 00:10:02.514 }, 00:10:02.514 { 00:10:02.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.514 "dma_device_type": 2 00:10:02.514 } 00:10:02.514 ], 00:10:02.514 "driver_specific": {} 00:10:02.514 } 00:10:02.514 ] 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:02.514 18:50:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.514 [2024-11-16 18:50:45.966106] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:02.514 [2024-11-16 18:50:45.966205] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:02.514 [2024-11-16 18:50:45.966246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.514 [2024-11-16 18:50:45.968092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.514 [2024-11-16 18:50:45.968189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.514 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.773 18:50:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.773 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.773 "name": "Existed_Raid", 00:10:02.773 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:02.773 "strip_size_kb": 64, 00:10:02.773 "state": "configuring", 00:10:02.773 "raid_level": "raid0", 00:10:02.773 "superblock": true, 00:10:02.773 "num_base_bdevs": 4, 00:10:02.773 "num_base_bdevs_discovered": 3, 00:10:02.773 "num_base_bdevs_operational": 4, 00:10:02.773 "base_bdevs_list": [ 00:10:02.773 { 00:10:02.773 "name": "BaseBdev1", 00:10:02.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.773 "is_configured": false, 00:10:02.773 "data_offset": 0, 00:10:02.773 "data_size": 0 00:10:02.773 }, 00:10:02.773 { 00:10:02.773 "name": "BaseBdev2", 00:10:02.773 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:02.773 "is_configured": true, 00:10:02.773 "data_offset": 2048, 00:10:02.773 "data_size": 63488 
00:10:02.773 }, 00:10:02.773 { 00:10:02.773 "name": "BaseBdev3", 00:10:02.773 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:02.773 "is_configured": true, 00:10:02.773 "data_offset": 2048, 00:10:02.773 "data_size": 63488 00:10:02.773 }, 00:10:02.773 { 00:10:02.773 "name": "BaseBdev4", 00:10:02.773 "uuid": "d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:02.773 "is_configured": true, 00:10:02.773 "data_offset": 2048, 00:10:02.773 "data_size": 63488 00:10:02.773 } 00:10:02.773 ] 00:10:02.773 }' 00:10:02.773 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.773 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.034 [2024-11-16 18:50:46.417362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.034 "name": "Existed_Raid", 00:10:03.034 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:03.034 "strip_size_kb": 64, 00:10:03.034 "state": "configuring", 00:10:03.034 "raid_level": "raid0", 00:10:03.034 "superblock": true, 00:10:03.034 "num_base_bdevs": 4, 00:10:03.034 "num_base_bdevs_discovered": 2, 00:10:03.034 "num_base_bdevs_operational": 4, 00:10:03.034 "base_bdevs_list": [ 00:10:03.034 { 00:10:03.034 "name": "BaseBdev1", 00:10:03.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.034 "is_configured": false, 00:10:03.034 "data_offset": 0, 00:10:03.034 "data_size": 0 00:10:03.034 }, 00:10:03.034 { 00:10:03.034 "name": null, 00:10:03.034 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:03.034 "is_configured": false, 00:10:03.034 "data_offset": 0, 00:10:03.034 "data_size": 63488 
00:10:03.034 }, 00:10:03.034 { 00:10:03.034 "name": "BaseBdev3", 00:10:03.034 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:03.034 "is_configured": true, 00:10:03.034 "data_offset": 2048, 00:10:03.034 "data_size": 63488 00:10:03.034 }, 00:10:03.034 { 00:10:03.034 "name": "BaseBdev4", 00:10:03.034 "uuid": "d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:03.034 "is_configured": true, 00:10:03.034 "data_offset": 2048, 00:10:03.034 "data_size": 63488 00:10:03.034 } 00:10:03.034 ] 00:10:03.034 }' 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.034 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.604 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.605 [2024-11-16 18:50:46.912774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.605 BaseBdev1 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.605 [ 00:10:03.605 { 00:10:03.605 "name": "BaseBdev1", 00:10:03.605 "aliases": [ 00:10:03.605 "2d582817-a63f-4c32-aeda-7217e857f18e" 00:10:03.605 ], 00:10:03.605 "product_name": "Malloc disk", 00:10:03.605 "block_size": 512, 00:10:03.605 "num_blocks": 65536, 00:10:03.605 "uuid": "2d582817-a63f-4c32-aeda-7217e857f18e", 00:10:03.605 "assigned_rate_limits": { 00:10:03.605 "rw_ios_per_sec": 0, 00:10:03.605 "rw_mbytes_per_sec": 0, 
00:10:03.605 "r_mbytes_per_sec": 0, 00:10:03.605 "w_mbytes_per_sec": 0 00:10:03.605 }, 00:10:03.605 "claimed": true, 00:10:03.605 "claim_type": "exclusive_write", 00:10:03.605 "zoned": false, 00:10:03.605 "supported_io_types": { 00:10:03.605 "read": true, 00:10:03.605 "write": true, 00:10:03.605 "unmap": true, 00:10:03.605 "flush": true, 00:10:03.605 "reset": true, 00:10:03.605 "nvme_admin": false, 00:10:03.605 "nvme_io": false, 00:10:03.605 "nvme_io_md": false, 00:10:03.605 "write_zeroes": true, 00:10:03.605 "zcopy": true, 00:10:03.605 "get_zone_info": false, 00:10:03.605 "zone_management": false, 00:10:03.605 "zone_append": false, 00:10:03.605 "compare": false, 00:10:03.605 "compare_and_write": false, 00:10:03.605 "abort": true, 00:10:03.605 "seek_hole": false, 00:10:03.605 "seek_data": false, 00:10:03.605 "copy": true, 00:10:03.605 "nvme_iov_md": false 00:10:03.605 }, 00:10:03.605 "memory_domains": [ 00:10:03.605 { 00:10:03.605 "dma_device_id": "system", 00:10:03.605 "dma_device_type": 1 00:10:03.605 }, 00:10:03.605 { 00:10:03.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.605 "dma_device_type": 2 00:10:03.605 } 00:10:03.605 ], 00:10:03.605 "driver_specific": {} 00:10:03.605 } 00:10:03.605 ] 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.605 18:50:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.605 "name": "Existed_Raid", 00:10:03.605 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:03.605 "strip_size_kb": 64, 00:10:03.605 "state": "configuring", 00:10:03.605 "raid_level": "raid0", 00:10:03.605 "superblock": true, 00:10:03.605 "num_base_bdevs": 4, 00:10:03.605 "num_base_bdevs_discovered": 3, 00:10:03.605 "num_base_bdevs_operational": 4, 00:10:03.605 "base_bdevs_list": [ 00:10:03.605 { 00:10:03.605 "name": "BaseBdev1", 00:10:03.605 "uuid": "2d582817-a63f-4c32-aeda-7217e857f18e", 00:10:03.605 "is_configured": true, 00:10:03.605 "data_offset": 2048, 00:10:03.605 "data_size": 63488 00:10:03.605 }, 00:10:03.605 { 
00:10:03.605 "name": null, 00:10:03.605 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:03.605 "is_configured": false, 00:10:03.605 "data_offset": 0, 00:10:03.605 "data_size": 63488 00:10:03.605 }, 00:10:03.605 { 00:10:03.605 "name": "BaseBdev3", 00:10:03.605 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:03.605 "is_configured": true, 00:10:03.605 "data_offset": 2048, 00:10:03.605 "data_size": 63488 00:10:03.605 }, 00:10:03.605 { 00:10:03.605 "name": "BaseBdev4", 00:10:03.605 "uuid": "d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:03.605 "is_configured": true, 00:10:03.605 "data_offset": 2048, 00:10:03.605 "data_size": 63488 00:10:03.605 } 00:10:03.605 ] 00:10:03.605 }' 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.605 18:50:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.176 [2024-11-16 18:50:47.447939] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.176 18:50:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.176 "name": "Existed_Raid", 00:10:04.176 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:04.176 "strip_size_kb": 64, 00:10:04.176 "state": "configuring", 00:10:04.176 "raid_level": "raid0", 00:10:04.176 "superblock": true, 00:10:04.176 "num_base_bdevs": 4, 00:10:04.176 "num_base_bdevs_discovered": 2, 00:10:04.176 "num_base_bdevs_operational": 4, 00:10:04.176 "base_bdevs_list": [ 00:10:04.176 { 00:10:04.176 "name": "BaseBdev1", 00:10:04.176 "uuid": "2d582817-a63f-4c32-aeda-7217e857f18e", 00:10:04.176 "is_configured": true, 00:10:04.176 "data_offset": 2048, 00:10:04.176 "data_size": 63488 00:10:04.176 }, 00:10:04.176 { 00:10:04.176 "name": null, 00:10:04.176 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:04.176 "is_configured": false, 00:10:04.176 "data_offset": 0, 00:10:04.176 "data_size": 63488 00:10:04.176 }, 00:10:04.176 { 00:10:04.176 "name": null, 00:10:04.176 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:04.176 "is_configured": false, 00:10:04.176 "data_offset": 0, 00:10:04.176 "data_size": 63488 00:10:04.176 }, 00:10:04.176 { 00:10:04.176 "name": "BaseBdev4", 00:10:04.176 "uuid": "d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:04.176 "is_configured": true, 00:10:04.176 "data_offset": 2048, 00:10:04.176 "data_size": 63488 00:10:04.176 } 00:10:04.176 ] 00:10:04.176 }' 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.176 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.437 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.437 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.437 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.437 
18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.437 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.697 [2024-11-16 18:50:47.935100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:04.697 "name": "Existed_Raid", 00:10:04.697 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:04.697 "strip_size_kb": 64, 00:10:04.697 "state": "configuring", 00:10:04.697 "raid_level": "raid0", 00:10:04.697 "superblock": true, 00:10:04.697 "num_base_bdevs": 4, 00:10:04.697 "num_base_bdevs_discovered": 3, 00:10:04.697 "num_base_bdevs_operational": 4, 00:10:04.697 "base_bdevs_list": [ 00:10:04.697 { 00:10:04.697 "name": "BaseBdev1", 00:10:04.697 "uuid": "2d582817-a63f-4c32-aeda-7217e857f18e", 00:10:04.697 "is_configured": true, 00:10:04.697 "data_offset": 2048, 00:10:04.697 "data_size": 63488 00:10:04.697 }, 00:10:04.697 { 00:10:04.697 "name": null, 00:10:04.697 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:04.697 "is_configured": false, 00:10:04.697 "data_offset": 0, 00:10:04.697 "data_size": 63488 00:10:04.697 }, 00:10:04.697 { 00:10:04.697 "name": "BaseBdev3", 00:10:04.697 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:04.697 "is_configured": true, 00:10:04.697 "data_offset": 2048, 00:10:04.697 "data_size": 63488 00:10:04.697 }, 00:10:04.697 { 00:10:04.697 "name": "BaseBdev4", 00:10:04.697 "uuid": 
"d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:04.697 "is_configured": true, 00:10:04.697 "data_offset": 2048, 00:10:04.697 "data_size": 63488 00:10:04.697 } 00:10:04.697 ] 00:10:04.697 }' 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:04.697 18:50:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.957 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:04.957 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.957 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.957 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:04.957 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.957 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:04.957 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:04.957 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.957 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.957 [2024-11-16 18:50:48.422283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.218 "name": "Existed_Raid", 00:10:05.218 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:05.218 "strip_size_kb": 64, 00:10:05.218 "state": "configuring", 00:10:05.218 "raid_level": "raid0", 00:10:05.218 "superblock": true, 00:10:05.218 "num_base_bdevs": 4, 00:10:05.218 "num_base_bdevs_discovered": 2, 00:10:05.218 "num_base_bdevs_operational": 4, 00:10:05.218 "base_bdevs_list": [ 00:10:05.218 { 00:10:05.218 "name": null, 00:10:05.218 
"uuid": "2d582817-a63f-4c32-aeda-7217e857f18e", 00:10:05.218 "is_configured": false, 00:10:05.218 "data_offset": 0, 00:10:05.218 "data_size": 63488 00:10:05.218 }, 00:10:05.218 { 00:10:05.218 "name": null, 00:10:05.218 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:05.218 "is_configured": false, 00:10:05.218 "data_offset": 0, 00:10:05.218 "data_size": 63488 00:10:05.218 }, 00:10:05.218 { 00:10:05.218 "name": "BaseBdev3", 00:10:05.218 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:05.218 "is_configured": true, 00:10:05.218 "data_offset": 2048, 00:10:05.218 "data_size": 63488 00:10:05.218 }, 00:10:05.218 { 00:10:05.218 "name": "BaseBdev4", 00:10:05.218 "uuid": "d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:05.218 "is_configured": true, 00:10:05.218 "data_offset": 2048, 00:10:05.218 "data_size": 63488 00:10:05.218 } 00:10:05.218 ] 00:10:05.218 }' 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.218 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.478 [2024-11-16 18:50:48.931381] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.478 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:05.478 18:50:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.738 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:05.738 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.738 "name": "Existed_Raid", 00:10:05.738 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:05.738 "strip_size_kb": 64, 00:10:05.738 "state": "configuring", 00:10:05.738 "raid_level": "raid0", 00:10:05.738 "superblock": true, 00:10:05.738 "num_base_bdevs": 4, 00:10:05.738 "num_base_bdevs_discovered": 3, 00:10:05.738 "num_base_bdevs_operational": 4, 00:10:05.738 "base_bdevs_list": [ 00:10:05.738 { 00:10:05.738 "name": null, 00:10:05.738 "uuid": "2d582817-a63f-4c32-aeda-7217e857f18e", 00:10:05.738 "is_configured": false, 00:10:05.738 "data_offset": 0, 00:10:05.738 "data_size": 63488 00:10:05.738 }, 00:10:05.738 { 00:10:05.738 "name": "BaseBdev2", 00:10:05.738 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:05.738 "is_configured": true, 00:10:05.738 "data_offset": 2048, 00:10:05.738 "data_size": 63488 00:10:05.738 }, 00:10:05.738 { 00:10:05.738 "name": "BaseBdev3", 00:10:05.738 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:05.738 "is_configured": true, 00:10:05.738 "data_offset": 2048, 00:10:05.738 "data_size": 63488 00:10:05.738 }, 00:10:05.738 { 00:10:05.738 "name": "BaseBdev4", 00:10:05.738 "uuid": "d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:05.738 "is_configured": true, 00:10:05.738 "data_offset": 2048, 00:10:05.738 "data_size": 63488 00:10:05.738 } 00:10:05.738 ] 00:10:05.738 }' 00:10:05.738 18:50:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:05.738 18:50:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.000 18:50:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 2d582817-a63f-4c32-aeda-7217e857f18e 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.000 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.263 [2024-11-16 18:50:49.495142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:06.263 [2024-11-16 18:50:49.495463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:06.263 [2024-11-16 18:50:49.495481] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:06.263 [2024-11-16 18:50:49.495760] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:06.263 [2024-11-16 18:50:49.495921] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:06.263 [2024-11-16 18:50:49.495935] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:06.263 NewBaseBdev 00:10:06.263 [2024-11-16 18:50:49.496059] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.263 18:50:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.263 [ 00:10:06.263 { 00:10:06.263 "name": "NewBaseBdev", 00:10:06.263 "aliases": [ 00:10:06.263 "2d582817-a63f-4c32-aeda-7217e857f18e" 00:10:06.263 ], 00:10:06.263 "product_name": "Malloc disk", 00:10:06.263 "block_size": 512, 00:10:06.263 "num_blocks": 65536, 00:10:06.263 "uuid": "2d582817-a63f-4c32-aeda-7217e857f18e", 00:10:06.263 "assigned_rate_limits": { 00:10:06.263 "rw_ios_per_sec": 0, 00:10:06.263 "rw_mbytes_per_sec": 0, 00:10:06.263 "r_mbytes_per_sec": 0, 00:10:06.263 "w_mbytes_per_sec": 0 00:10:06.263 }, 00:10:06.263 "claimed": true, 00:10:06.263 "claim_type": "exclusive_write", 00:10:06.263 "zoned": false, 00:10:06.263 "supported_io_types": { 00:10:06.263 "read": true, 00:10:06.263 "write": true, 00:10:06.263 "unmap": true, 00:10:06.263 "flush": true, 00:10:06.263 "reset": true, 00:10:06.263 "nvme_admin": false, 00:10:06.263 "nvme_io": false, 00:10:06.263 "nvme_io_md": false, 00:10:06.263 "write_zeroes": true, 00:10:06.263 "zcopy": true, 00:10:06.263 "get_zone_info": false, 00:10:06.263 "zone_management": false, 00:10:06.263 "zone_append": false, 00:10:06.263 "compare": false, 00:10:06.263 "compare_and_write": false, 00:10:06.263 "abort": true, 00:10:06.263 "seek_hole": false, 00:10:06.263 "seek_data": false, 00:10:06.263 "copy": true, 00:10:06.263 "nvme_iov_md": false 00:10:06.263 }, 00:10:06.263 "memory_domains": [ 00:10:06.263 { 00:10:06.263 "dma_device_id": "system", 00:10:06.263 "dma_device_type": 1 00:10:06.263 }, 00:10:06.263 { 00:10:06.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.263 "dma_device_type": 2 00:10:06.263 } 00:10:06.263 ], 00:10:06.263 "driver_specific": {} 00:10:06.263 } 00:10:06.263 ] 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:06.263 18:50:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.263 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.264 "name": "Existed_Raid", 00:10:06.264 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:06.264 "strip_size_kb": 64, 00:10:06.264 
"state": "online", 00:10:06.264 "raid_level": "raid0", 00:10:06.264 "superblock": true, 00:10:06.264 "num_base_bdevs": 4, 00:10:06.264 "num_base_bdevs_discovered": 4, 00:10:06.264 "num_base_bdevs_operational": 4, 00:10:06.264 "base_bdevs_list": [ 00:10:06.264 { 00:10:06.264 "name": "NewBaseBdev", 00:10:06.264 "uuid": "2d582817-a63f-4c32-aeda-7217e857f18e", 00:10:06.264 "is_configured": true, 00:10:06.264 "data_offset": 2048, 00:10:06.264 "data_size": 63488 00:10:06.264 }, 00:10:06.264 { 00:10:06.264 "name": "BaseBdev2", 00:10:06.264 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:06.264 "is_configured": true, 00:10:06.264 "data_offset": 2048, 00:10:06.264 "data_size": 63488 00:10:06.264 }, 00:10:06.264 { 00:10:06.264 "name": "BaseBdev3", 00:10:06.264 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:06.264 "is_configured": true, 00:10:06.264 "data_offset": 2048, 00:10:06.264 "data_size": 63488 00:10:06.264 }, 00:10:06.264 { 00:10:06.264 "name": "BaseBdev4", 00:10:06.264 "uuid": "d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:06.264 "is_configured": true, 00:10:06.264 "data_offset": 2048, 00:10:06.264 "data_size": 63488 00:10:06.264 } 00:10:06.264 ] 00:10:06.264 }' 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.264 18:50:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.833 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:06.833 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:06.833 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.833 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:06.833 18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.833 
18:50:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.833 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.833 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:06.833 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.833 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.833 [2024-11-16 18:50:50.010682] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.833 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.833 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.833 "name": "Existed_Raid", 00:10:06.833 "aliases": [ 00:10:06.833 "10d59ec1-71e2-4079-828c-4b7c2ed55006" 00:10:06.833 ], 00:10:06.833 "product_name": "Raid Volume", 00:10:06.833 "block_size": 512, 00:10:06.833 "num_blocks": 253952, 00:10:06.833 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:06.833 "assigned_rate_limits": { 00:10:06.833 "rw_ios_per_sec": 0, 00:10:06.833 "rw_mbytes_per_sec": 0, 00:10:06.833 "r_mbytes_per_sec": 0, 00:10:06.833 "w_mbytes_per_sec": 0 00:10:06.833 }, 00:10:06.833 "claimed": false, 00:10:06.833 "zoned": false, 00:10:06.833 "supported_io_types": { 00:10:06.833 "read": true, 00:10:06.833 "write": true, 00:10:06.833 "unmap": true, 00:10:06.833 "flush": true, 00:10:06.833 "reset": true, 00:10:06.833 "nvme_admin": false, 00:10:06.833 "nvme_io": false, 00:10:06.833 "nvme_io_md": false, 00:10:06.833 "write_zeroes": true, 00:10:06.833 "zcopy": false, 00:10:06.833 "get_zone_info": false, 00:10:06.833 "zone_management": false, 00:10:06.833 "zone_append": false, 00:10:06.833 "compare": false, 00:10:06.833 "compare_and_write": false, 00:10:06.833 "abort": 
false, 00:10:06.833 "seek_hole": false, 00:10:06.833 "seek_data": false, 00:10:06.833 "copy": false, 00:10:06.833 "nvme_iov_md": false 00:10:06.833 }, 00:10:06.833 "memory_domains": [ 00:10:06.833 { 00:10:06.833 "dma_device_id": "system", 00:10:06.833 "dma_device_type": 1 00:10:06.833 }, 00:10:06.833 { 00:10:06.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.833 "dma_device_type": 2 00:10:06.833 }, 00:10:06.833 { 00:10:06.833 "dma_device_id": "system", 00:10:06.833 "dma_device_type": 1 00:10:06.833 }, 00:10:06.833 { 00:10:06.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.833 "dma_device_type": 2 00:10:06.833 }, 00:10:06.833 { 00:10:06.833 "dma_device_id": "system", 00:10:06.833 "dma_device_type": 1 00:10:06.833 }, 00:10:06.833 { 00:10:06.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.833 "dma_device_type": 2 00:10:06.833 }, 00:10:06.833 { 00:10:06.833 "dma_device_id": "system", 00:10:06.833 "dma_device_type": 1 00:10:06.833 }, 00:10:06.833 { 00:10:06.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.833 "dma_device_type": 2 00:10:06.833 } 00:10:06.833 ], 00:10:06.833 "driver_specific": { 00:10:06.833 "raid": { 00:10:06.833 "uuid": "10d59ec1-71e2-4079-828c-4b7c2ed55006", 00:10:06.833 "strip_size_kb": 64, 00:10:06.833 "state": "online", 00:10:06.833 "raid_level": "raid0", 00:10:06.833 "superblock": true, 00:10:06.833 "num_base_bdevs": 4, 00:10:06.833 "num_base_bdevs_discovered": 4, 00:10:06.833 "num_base_bdevs_operational": 4, 00:10:06.833 "base_bdevs_list": [ 00:10:06.833 { 00:10:06.833 "name": "NewBaseBdev", 00:10:06.833 "uuid": "2d582817-a63f-4c32-aeda-7217e857f18e", 00:10:06.833 "is_configured": true, 00:10:06.833 "data_offset": 2048, 00:10:06.833 "data_size": 63488 00:10:06.833 }, 00:10:06.833 { 00:10:06.833 "name": "BaseBdev2", 00:10:06.833 "uuid": "5177af22-abee-4f69-bb0b-dfad1bb9a2e6", 00:10:06.833 "is_configured": true, 00:10:06.833 "data_offset": 2048, 00:10:06.833 "data_size": 63488 00:10:06.833 }, 00:10:06.833 { 00:10:06.833 
"name": "BaseBdev3", 00:10:06.833 "uuid": "1330763d-77d9-4b5e-b028-aec911f170b6", 00:10:06.833 "is_configured": true, 00:10:06.833 "data_offset": 2048, 00:10:06.833 "data_size": 63488 00:10:06.834 }, 00:10:06.834 { 00:10:06.834 "name": "BaseBdev4", 00:10:06.834 "uuid": "d9f9f695-64b9-4a00-91c6-ceff5feebe4c", 00:10:06.834 "is_configured": true, 00:10:06.834 "data_offset": 2048, 00:10:06.834 "data_size": 63488 00:10:06.834 } 00:10:06.834 ] 00:10:06.834 } 00:10:06.834 } 00:10:06.834 }' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:06.834 BaseBdev2 00:10:06.834 BaseBdev3 00:10:06.834 BaseBdev4' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.834 18:50:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.834 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.095 [2024-11-16 18:50:50.325756] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.095 [2024-11-16 18:50:50.325825] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:07.095 [2024-11-16 18:50:50.325943] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:07.095 [2024-11-16 18:50:50.326045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:07.095 [2024-11-16 18:50:50.326095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69841 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69841 ']' 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69841 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69841 00:10:07.095 killing process with pid 69841 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69841' 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69841 00:10:07.095 [2024-11-16 18:50:50.371681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:07.095 18:50:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69841 00:10:07.405 [2024-11-16 18:50:50.758774] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:08.806 18:50:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:08.806 00:10:08.806 real 0m11.350s 00:10:08.806 user 0m18.037s 00:10:08.806 sys 0m2.054s 00:10:08.806 18:50:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.806 18:50:51 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.806 ************************************ 00:10:08.806 END TEST raid_state_function_test_sb 00:10:08.806 ************************************ 00:10:08.806 18:50:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:08.806 18:50:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:08.806 18:50:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.806 18:50:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:08.806 ************************************ 00:10:08.806 START TEST raid_superblock_test 00:10:08.806 ************************************ 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70511 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70511 00:10:08.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70511 ']' 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.806 18:50:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.806 [2024-11-16 18:50:52.002138] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:08.806 [2024-11-16 18:50:52.002815] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70511 ] 00:10:08.806 [2024-11-16 18:50:52.178573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.065 [2024-11-16 18:50:52.292551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.065 [2024-11-16 18:50:52.488782] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.065 [2024-11-16 18:50:52.488908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:09.634 
18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.634 malloc1 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.634 [2024-11-16 18:50:52.901330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:09.634 [2024-11-16 18:50:52.901395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.634 [2024-11-16 18:50:52.901419] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:09.634 [2024-11-16 18:50:52.901428] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.634 [2024-11-16 18:50:52.903501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.634 [2024-11-16 18:50:52.903542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:09.634 pt1 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.634 malloc2 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:09.634 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.635 [2024-11-16 18:50:52.956943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:09.635 [2024-11-16 18:50:52.957052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.635 [2024-11-16 18:50:52.957092] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:09.635 [2024-11-16 18:50:52.957120] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.635 [2024-11-16 18:50:52.959283] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.635 [2024-11-16 18:50:52.959353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:09.635 
pt2 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.635 18:50:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.635 malloc3 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.635 [2024-11-16 18:50:53.027870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:09.635 [2024-11-16 18:50:53.027965] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.635 [2024-11-16 18:50:53.028005] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:09.635 [2024-11-16 18:50:53.028034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.635 [2024-11-16 18:50:53.030248] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.635 [2024-11-16 18:50:53.030332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:09.635 pt3 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.635 malloc4 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.635 [2024-11-16 18:50:53.081835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:09.635 [2024-11-16 18:50:53.081930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:09.635 [2024-11-16 18:50:53.081967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:09.635 [2024-11-16 18:50:53.081996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:09.635 [2024-11-16 18:50:53.084122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:09.635 [2024-11-16 18:50:53.084194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:09.635 pt4 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.635 [2024-11-16 18:50:53.093858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:09.635 [2024-11-16 
18:50:53.095680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:09.635 [2024-11-16 18:50:53.095778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:09.635 [2024-11-16 18:50:53.095868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:09.635 [2024-11-16 18:50:53.096152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:09.635 [2024-11-16 18:50:53.096218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:09.635 [2024-11-16 18:50:53.096519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:09.635 [2024-11-16 18:50:53.096761] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:09.635 [2024-11-16 18:50:53.096816] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:09.635 [2024-11-16 18:50:53.097042] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.635 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.895 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.895 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:09.895 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.895 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.895 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.895 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.895 "name": "raid_bdev1", 00:10:09.895 "uuid": "48346456-b66b-41ab-9720-c0b8a336f402", 00:10:09.895 "strip_size_kb": 64, 00:10:09.895 "state": "online", 00:10:09.895 "raid_level": "raid0", 00:10:09.895 "superblock": true, 00:10:09.895 "num_base_bdevs": 4, 00:10:09.895 "num_base_bdevs_discovered": 4, 00:10:09.895 "num_base_bdevs_operational": 4, 00:10:09.895 "base_bdevs_list": [ 00:10:09.895 { 00:10:09.895 "name": "pt1", 00:10:09.895 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.895 "is_configured": true, 00:10:09.895 "data_offset": 2048, 00:10:09.895 "data_size": 63488 00:10:09.895 }, 00:10:09.895 { 00:10:09.895 "name": "pt2", 00:10:09.895 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.895 "is_configured": true, 00:10:09.895 "data_offset": 2048, 00:10:09.895 "data_size": 63488 00:10:09.895 }, 00:10:09.895 { 00:10:09.895 "name": "pt3", 00:10:09.895 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.895 "is_configured": true, 00:10:09.895 "data_offset": 2048, 00:10:09.895 
"data_size": 63488 00:10:09.895 }, 00:10:09.895 { 00:10:09.895 "name": "pt4", 00:10:09.895 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:09.895 "is_configured": true, 00:10:09.895 "data_offset": 2048, 00:10:09.895 "data_size": 63488 00:10:09.895 } 00:10:09.895 ] 00:10:09.895 }' 00:10:09.895 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.895 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.154 [2024-11-16 18:50:53.585337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.154 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:10.154 "name": "raid_bdev1", 00:10:10.154 "aliases": [ 00:10:10.154 "48346456-b66b-41ab-9720-c0b8a336f402" 
00:10:10.154 ], 00:10:10.154 "product_name": "Raid Volume", 00:10:10.154 "block_size": 512, 00:10:10.154 "num_blocks": 253952, 00:10:10.154 "uuid": "48346456-b66b-41ab-9720-c0b8a336f402", 00:10:10.154 "assigned_rate_limits": { 00:10:10.154 "rw_ios_per_sec": 0, 00:10:10.154 "rw_mbytes_per_sec": 0, 00:10:10.154 "r_mbytes_per_sec": 0, 00:10:10.154 "w_mbytes_per_sec": 0 00:10:10.154 }, 00:10:10.154 "claimed": false, 00:10:10.154 "zoned": false, 00:10:10.154 "supported_io_types": { 00:10:10.154 "read": true, 00:10:10.154 "write": true, 00:10:10.154 "unmap": true, 00:10:10.154 "flush": true, 00:10:10.154 "reset": true, 00:10:10.154 "nvme_admin": false, 00:10:10.154 "nvme_io": false, 00:10:10.154 "nvme_io_md": false, 00:10:10.154 "write_zeroes": true, 00:10:10.154 "zcopy": false, 00:10:10.154 "get_zone_info": false, 00:10:10.154 "zone_management": false, 00:10:10.154 "zone_append": false, 00:10:10.154 "compare": false, 00:10:10.154 "compare_and_write": false, 00:10:10.154 "abort": false, 00:10:10.154 "seek_hole": false, 00:10:10.154 "seek_data": false, 00:10:10.154 "copy": false, 00:10:10.154 "nvme_iov_md": false 00:10:10.154 }, 00:10:10.154 "memory_domains": [ 00:10:10.154 { 00:10:10.154 "dma_device_id": "system", 00:10:10.154 "dma_device_type": 1 00:10:10.154 }, 00:10:10.154 { 00:10:10.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.154 "dma_device_type": 2 00:10:10.154 }, 00:10:10.154 { 00:10:10.154 "dma_device_id": "system", 00:10:10.154 "dma_device_type": 1 00:10:10.154 }, 00:10:10.154 { 00:10:10.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.154 "dma_device_type": 2 00:10:10.154 }, 00:10:10.154 { 00:10:10.154 "dma_device_id": "system", 00:10:10.154 "dma_device_type": 1 00:10:10.154 }, 00:10:10.154 { 00:10:10.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.154 "dma_device_type": 2 00:10:10.154 }, 00:10:10.154 { 00:10:10.154 "dma_device_id": "system", 00:10:10.154 "dma_device_type": 1 00:10:10.154 }, 00:10:10.155 { 00:10:10.155 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:10.155 "dma_device_type": 2 00:10:10.155 } 00:10:10.155 ], 00:10:10.155 "driver_specific": { 00:10:10.155 "raid": { 00:10:10.155 "uuid": "48346456-b66b-41ab-9720-c0b8a336f402", 00:10:10.155 "strip_size_kb": 64, 00:10:10.155 "state": "online", 00:10:10.155 "raid_level": "raid0", 00:10:10.155 "superblock": true, 00:10:10.155 "num_base_bdevs": 4, 00:10:10.155 "num_base_bdevs_discovered": 4, 00:10:10.155 "num_base_bdevs_operational": 4, 00:10:10.155 "base_bdevs_list": [ 00:10:10.155 { 00:10:10.155 "name": "pt1", 00:10:10.155 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.155 "is_configured": true, 00:10:10.155 "data_offset": 2048, 00:10:10.155 "data_size": 63488 00:10:10.155 }, 00:10:10.155 { 00:10:10.155 "name": "pt2", 00:10:10.155 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.155 "is_configured": true, 00:10:10.155 "data_offset": 2048, 00:10:10.155 "data_size": 63488 00:10:10.155 }, 00:10:10.155 { 00:10:10.155 "name": "pt3", 00:10:10.155 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.155 "is_configured": true, 00:10:10.155 "data_offset": 2048, 00:10:10.155 "data_size": 63488 00:10:10.155 }, 00:10:10.155 { 00:10:10.155 "name": "pt4", 00:10:10.155 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:10.155 "is_configured": true, 00:10:10.155 "data_offset": 2048, 00:10:10.155 "data_size": 63488 00:10:10.155 } 00:10:10.155 ] 00:10:10.155 } 00:10:10.155 } 00:10:10.155 }' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:10.415 pt2 00:10:10.415 pt3 00:10:10.415 pt4' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.415 18:50:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.415 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.675 [2024-11-16 18:50:53.912720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=48346456-b66b-41ab-9720-c0b8a336f402 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 48346456-b66b-41ab-9720-c0b8a336f402 ']' 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.675 [2024-11-16 18:50:53.956349] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.675 [2024-11-16 18:50:53.956426] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:10.675 [2024-11-16 18:50:53.956542] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.675 [2024-11-16 18:50:53.956636] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.675 [2024-11-16 18:50:53.956727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.675 18:50:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.675 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:10.675 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:10.675 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:10.675 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.676 18:50:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.676 [2024-11-16 18:50:54.104140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:10.676 [2024-11-16 18:50:54.106160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:10.676 [2024-11-16 18:50:54.106253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:10.676 [2024-11-16 18:50:54.106307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:10.676 [2024-11-16 18:50:54.106388] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:10.676 [2024-11-16 18:50:54.106502] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:10.676 [2024-11-16 18:50:54.106566] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:10.676 [2024-11-16 18:50:54.106635] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:10.676 [2024-11-16 18:50:54.106702] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:10.676 [2024-11-16 18:50:54.106745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:10:10.676 request: 00:10:10.676 { 00:10:10.676 "name": "raid_bdev1", 00:10:10.676 "raid_level": "raid0", 00:10:10.676 "base_bdevs": [ 00:10:10.676 "malloc1", 00:10:10.676 "malloc2", 00:10:10.676 "malloc3", 00:10:10.676 "malloc4" 00:10:10.676 ], 00:10:10.676 "strip_size_kb": 64, 00:10:10.676 "superblock": false, 00:10:10.676 "method": "bdev_raid_create", 00:10:10.676 "req_id": 1 00:10:10.676 } 00:10:10.676 Got JSON-RPC error response 00:10:10.676 response: 00:10:10.676 { 00:10:10.676 "code": -17, 00:10:10.676 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:10.676 } 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:10.676 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.935 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:10.935 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:10.935 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:10:10.935 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.936 [2024-11-16 18:50:54.168003] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:10.936 [2024-11-16 18:50:54.168112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:10.936 [2024-11-16 18:50:54.168146] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:10.936 [2024-11-16 18:50:54.168177] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:10.936 [2024-11-16 18:50:54.170485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:10.936 [2024-11-16 18:50:54.170566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:10.936 [2024-11-16 18:50:54.170701] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:10.936 [2024-11-16 18:50:54.170803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:10.936 pt1 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.936 "name": "raid_bdev1", 00:10:10.936 "uuid": "48346456-b66b-41ab-9720-c0b8a336f402", 00:10:10.936 "strip_size_kb": 64, 00:10:10.936 "state": "configuring", 00:10:10.936 "raid_level": "raid0", 00:10:10.936 "superblock": true, 00:10:10.936 "num_base_bdevs": 4, 00:10:10.936 "num_base_bdevs_discovered": 1, 00:10:10.936 "num_base_bdevs_operational": 4, 00:10:10.936 "base_bdevs_list": [ 00:10:10.936 { 00:10:10.936 "name": "pt1", 00:10:10.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:10.936 "is_configured": true, 00:10:10.936 "data_offset": 2048, 00:10:10.936 "data_size": 63488 00:10:10.936 }, 00:10:10.936 { 00:10:10.936 "name": null, 00:10:10.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.936 "is_configured": false, 00:10:10.936 "data_offset": 2048, 00:10:10.936 "data_size": 63488 00:10:10.936 }, 00:10:10.936 { 00:10:10.936 "name": null, 00:10:10.936 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.936 "is_configured": false, 00:10:10.936 "data_offset": 2048, 00:10:10.936 "data_size": 63488 00:10:10.936 }, 00:10:10.936 { 00:10:10.936 "name": null, 00:10:10.936 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:10.936 "is_configured": false, 00:10:10.936 "data_offset": 2048, 00:10:10.936 "data_size": 63488 00:10:10.936 } 00:10:10.936 ] 00:10:10.936 }' 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.936 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.196 [2024-11-16 18:50:54.591324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.196 [2024-11-16 18:50:54.591445] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.196 [2024-11-16 18:50:54.591469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:11.196 [2024-11-16 18:50:54.591480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.196 [2024-11-16 18:50:54.591964] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.196 [2024-11-16 18:50:54.591993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.196 [2024-11-16 18:50:54.592077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.196 [2024-11-16 18:50:54.592107] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.196 pt2 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.196 [2024-11-16 18:50:54.599310] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.196 18:50:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.196 "name": "raid_bdev1", 00:10:11.196 "uuid": "48346456-b66b-41ab-9720-c0b8a336f402", 00:10:11.196 "strip_size_kb": 64, 00:10:11.196 "state": "configuring", 00:10:11.196 "raid_level": "raid0", 00:10:11.196 "superblock": true, 00:10:11.196 "num_base_bdevs": 4, 00:10:11.196 "num_base_bdevs_discovered": 1, 00:10:11.196 "num_base_bdevs_operational": 4, 00:10:11.196 "base_bdevs_list": [ 00:10:11.196 { 00:10:11.196 "name": "pt1", 00:10:11.196 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.196 "is_configured": true, 00:10:11.196 "data_offset": 2048, 00:10:11.196 "data_size": 63488 00:10:11.196 }, 00:10:11.196 { 00:10:11.196 "name": null, 00:10:11.196 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.196 "is_configured": false, 00:10:11.196 "data_offset": 0, 00:10:11.196 "data_size": 63488 00:10:11.196 }, 00:10:11.196 { 00:10:11.196 "name": null, 00:10:11.196 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.196 "is_configured": false, 00:10:11.196 "data_offset": 2048, 00:10:11.196 "data_size": 63488 00:10:11.196 }, 00:10:11.196 { 00:10:11.196 "name": null, 00:10:11.196 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:11.196 "is_configured": false, 00:10:11.196 "data_offset": 2048, 00:10:11.196 "data_size": 63488 00:10:11.196 } 00:10:11.196 ] 00:10:11.196 }' 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.196 18:50:54 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.767 [2024-11-16 18:50:55.082490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:11.767 [2024-11-16 18:50:55.082609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.767 [2024-11-16 18:50:55.082658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:11.767 [2024-11-16 18:50:55.082689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.767 [2024-11-16 18:50:55.083162] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.767 [2024-11-16 18:50:55.083221] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:11.767 [2024-11-16 18:50:55.083313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:11.767 [2024-11-16 18:50:55.083336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:11.767 pt2 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.767 [2024-11-16 18:50:55.094432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:11.767 [2024-11-16 18:50:55.094480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.767 [2024-11-16 18:50:55.094520] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:11.767 [2024-11-16 18:50:55.094530] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.767 [2024-11-16 18:50:55.094919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.767 [2024-11-16 18:50:55.094941] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:11.767 [2024-11-16 18:50:55.095006] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:11.767 [2024-11-16 18:50:55.095024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:11.767 pt3 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.767 [2024-11-16 18:50:55.106385] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:11.767 [2024-11-16 18:50:55.106434] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.767 [2024-11-16 18:50:55.106451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:11.767 [2024-11-16 18:50:55.106458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.767 [2024-11-16 18:50:55.106832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.767 [2024-11-16 18:50:55.106848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:11.767 [2024-11-16 18:50:55.106904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:11.767 [2024-11-16 18:50:55.106922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:11.767 [2024-11-16 18:50:55.107044] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:11.767 [2024-11-16 18:50:55.107058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:11.767 [2024-11-16 18:50:55.107280] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:11.767 [2024-11-16 18:50:55.107411] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:11.767 [2024-11-16 18:50:55.107424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:11.767 [2024-11-16 18:50:55.107546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.767 pt4 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.767 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.768 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.768 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.768 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.768 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.768 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.768 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.768 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.768 "name": "raid_bdev1", 00:10:11.768 "uuid": "48346456-b66b-41ab-9720-c0b8a336f402", 00:10:11.768 "strip_size_kb": 64, 00:10:11.768 "state": "online", 00:10:11.768 "raid_level": "raid0", 00:10:11.768 
"superblock": true, 00:10:11.768 "num_base_bdevs": 4, 00:10:11.768 "num_base_bdevs_discovered": 4, 00:10:11.768 "num_base_bdevs_operational": 4, 00:10:11.768 "base_bdevs_list": [ 00:10:11.768 { 00:10:11.768 "name": "pt1", 00:10:11.768 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:11.768 "is_configured": true, 00:10:11.768 "data_offset": 2048, 00:10:11.768 "data_size": 63488 00:10:11.768 }, 00:10:11.768 { 00:10:11.768 "name": "pt2", 00:10:11.768 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:11.768 "is_configured": true, 00:10:11.768 "data_offset": 2048, 00:10:11.768 "data_size": 63488 00:10:11.768 }, 00:10:11.768 { 00:10:11.768 "name": "pt3", 00:10:11.768 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:11.768 "is_configured": true, 00:10:11.768 "data_offset": 2048, 00:10:11.768 "data_size": 63488 00:10:11.768 }, 00:10:11.768 { 00:10:11.768 "name": "pt4", 00:10:11.768 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:11.768 "is_configured": true, 00:10:11.768 "data_offset": 2048, 00:10:11.768 "data_size": 63488 00:10:11.768 } 00:10:11.768 ] 00:10:11.768 }' 00:10:11.768 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.768 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:12.338 18:50:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:12.338 [2024-11-16 18:50:55.569994] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.338 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:12.338 "name": "raid_bdev1", 00:10:12.338 "aliases": [ 00:10:12.338 "48346456-b66b-41ab-9720-c0b8a336f402" 00:10:12.338 ], 00:10:12.338 "product_name": "Raid Volume", 00:10:12.338 "block_size": 512, 00:10:12.338 "num_blocks": 253952, 00:10:12.338 "uuid": "48346456-b66b-41ab-9720-c0b8a336f402", 00:10:12.338 "assigned_rate_limits": { 00:10:12.338 "rw_ios_per_sec": 0, 00:10:12.338 "rw_mbytes_per_sec": 0, 00:10:12.338 "r_mbytes_per_sec": 0, 00:10:12.338 "w_mbytes_per_sec": 0 00:10:12.338 }, 00:10:12.338 "claimed": false, 00:10:12.338 "zoned": false, 00:10:12.338 "supported_io_types": { 00:10:12.338 "read": true, 00:10:12.338 "write": true, 00:10:12.338 "unmap": true, 00:10:12.338 "flush": true, 00:10:12.338 "reset": true, 00:10:12.338 "nvme_admin": false, 00:10:12.338 "nvme_io": false, 00:10:12.338 "nvme_io_md": false, 00:10:12.338 "write_zeroes": true, 00:10:12.338 "zcopy": false, 00:10:12.338 "get_zone_info": false, 00:10:12.338 "zone_management": false, 00:10:12.338 "zone_append": false, 00:10:12.338 "compare": false, 00:10:12.339 "compare_and_write": false, 00:10:12.339 "abort": false, 00:10:12.339 "seek_hole": false, 00:10:12.339 "seek_data": false, 00:10:12.339 "copy": false, 00:10:12.339 "nvme_iov_md": false 00:10:12.339 }, 00:10:12.339 
"memory_domains": [ 00:10:12.339 { 00:10:12.339 "dma_device_id": "system", 00:10:12.339 "dma_device_type": 1 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.339 "dma_device_type": 2 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "dma_device_id": "system", 00:10:12.339 "dma_device_type": 1 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.339 "dma_device_type": 2 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "dma_device_id": "system", 00:10:12.339 "dma_device_type": 1 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.339 "dma_device_type": 2 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "dma_device_id": "system", 00:10:12.339 "dma_device_type": 1 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.339 "dma_device_type": 2 00:10:12.339 } 00:10:12.339 ], 00:10:12.339 "driver_specific": { 00:10:12.339 "raid": { 00:10:12.339 "uuid": "48346456-b66b-41ab-9720-c0b8a336f402", 00:10:12.339 "strip_size_kb": 64, 00:10:12.339 "state": "online", 00:10:12.339 "raid_level": "raid0", 00:10:12.339 "superblock": true, 00:10:12.339 "num_base_bdevs": 4, 00:10:12.339 "num_base_bdevs_discovered": 4, 00:10:12.339 "num_base_bdevs_operational": 4, 00:10:12.339 "base_bdevs_list": [ 00:10:12.339 { 00:10:12.339 "name": "pt1", 00:10:12.339 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:12.339 "is_configured": true, 00:10:12.339 "data_offset": 2048, 00:10:12.339 "data_size": 63488 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "name": "pt2", 00:10:12.339 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:12.339 "is_configured": true, 00:10:12.339 "data_offset": 2048, 00:10:12.339 "data_size": 63488 00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "name": "pt3", 00:10:12.339 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:12.339 "is_configured": true, 00:10:12.339 "data_offset": 2048, 00:10:12.339 "data_size": 63488 
00:10:12.339 }, 00:10:12.339 { 00:10:12.339 "name": "pt4", 00:10:12.339 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:12.339 "is_configured": true, 00:10:12.339 "data_offset": 2048, 00:10:12.339 "data_size": 63488 00:10:12.339 } 00:10:12.339 ] 00:10:12.339 } 00:10:12.339 } 00:10:12.339 }' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:12.339 pt2 00:10:12.339 pt3 00:10:12.339 pt4' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:12.339 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.599 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:12.599 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:12.599 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:12.599 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.599 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.599 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:12.599 [2024-11-16 18:50:55.841418] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:12.599 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.599 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 48346456-b66b-41ab-9720-c0b8a336f402 '!=' 48346456-b66b-41ab-9720-c0b8a336f402 ']' 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70511 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70511 ']' 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70511 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70511 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70511' 00:10:12.600 killing process with pid 70511 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70511 00:10:12.600 [2024-11-16 18:50:55.913869] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:12.600 [2024-11-16 18:50:55.913996] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:12.600 18:50:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70511 00:10:12.600 [2024-11-16 18:50:55.914095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:12.600 [2024-11-16 18:50:55.914106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:12.860 [2024-11-16 18:50:56.299119] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:14.239 18:50:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:14.239 00:10:14.239 real 0m5.441s 00:10:14.239 user 0m7.826s 00:10:14.239 sys 0m0.941s 00:10:14.239 18:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.239 18:50:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.240 ************************************ 00:10:14.240 END TEST raid_superblock_test 
00:10:14.240 ************************************ 00:10:14.240 18:50:57 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:10:14.240 18:50:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:14.240 18:50:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.240 18:50:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:14.240 ************************************ 00:10:14.240 START TEST raid_read_error_test 00:10:14.240 ************************************ 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9NJxAvD97y 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70770 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70770 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- 
# '[' -z 70770 ']' 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.240 18:50:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.240 [2024-11-16 18:50:57.536443] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:14.240 [2024-11-16 18:50:57.536636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70770 ] 00:10:14.499 [2024-11-16 18:50:57.711910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:14.499 [2024-11-16 18:50:57.824122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.759 [2024-11-16 18:50:58.011238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:14.759 [2024-11-16 18:50:58.011378] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.019 BaseBdev1_malloc 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.019 true 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.019 [2024-11-16 18:50:58.380899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:15.019 [2024-11-16 18:50:58.381000] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.019 [2024-11-16 18:50:58.381023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:15.019 [2024-11-16 18:50:58.381033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.019 [2024-11-16 18:50:58.383099] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.019 [2024-11-16 18:50:58.383153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:15.019 BaseBdev1 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.019 BaseBdev2_malloc 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.019 true 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.019 [2024-11-16 18:50:58.449335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:15.019 [2024-11-16 18:50:58.449396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.019 [2024-11-16 18:50:58.449413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:15.019 [2024-11-16 18:50:58.449440] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.019 [2024-11-16 18:50:58.451641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.019 [2024-11-16 18:50:58.451713] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:15.019 BaseBdev2 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.019 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.020 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:15.020 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.020 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.279 BaseBdev3_malloc 00:10:15.279 18:50:58 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.279 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:15.279 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.280 true 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.280 [2024-11-16 18:50:58.535296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:15.280 [2024-11-16 18:50:58.535379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.280 [2024-11-16 18:50:58.535395] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:15.280 [2024-11-16 18:50:58.535405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.280 [2024-11-16 18:50:58.537497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.280 [2024-11-16 18:50:58.537535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:15.280 BaseBdev3 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.280 BaseBdev4_malloc 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.280 true 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.280 [2024-11-16 18:50:58.602858] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:15.280 [2024-11-16 18:50:58.602911] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:15.280 [2024-11-16 18:50:58.602944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:15.280 [2024-11-16 18:50:58.602954] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:15.280 [2024-11-16 18:50:58.604980] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:15.280 [2024-11-16 18:50:58.605096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:15.280 BaseBdev4 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.280 [2024-11-16 18:50:58.614895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.280 [2024-11-16 18:50:58.616748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:15.280 [2024-11-16 18:50:58.616822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.280 [2024-11-16 18:50:58.616887] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:15.280 [2024-11-16 18:50:58.617114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:15.280 [2024-11-16 18:50:58.617129] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:15.280 [2024-11-16 18:50:58.617373] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:15.280 [2024-11-16 18:50:58.617517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:15.280 [2024-11-16 18:50:58.617527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:15.280 [2024-11-16 18:50:58.617696] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:15.280 18:50:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.280 "name": "raid_bdev1", 00:10:15.280 "uuid": "7a87bfb1-e2b8-4aa9-863f-c4206ae40304", 00:10:15.280 "strip_size_kb": 64, 00:10:15.280 "state": "online", 00:10:15.280 "raid_level": "raid0", 00:10:15.280 "superblock": true, 00:10:15.280 "num_base_bdevs": 4, 00:10:15.280 "num_base_bdevs_discovered": 4, 00:10:15.280 "num_base_bdevs_operational": 4, 00:10:15.280 "base_bdevs_list": [ 00:10:15.280 
{ 00:10:15.280 "name": "BaseBdev1", 00:10:15.280 "uuid": "247a6359-89cc-5737-afc6-791ad665dde3", 00:10:15.280 "is_configured": true, 00:10:15.280 "data_offset": 2048, 00:10:15.280 "data_size": 63488 00:10:15.280 }, 00:10:15.280 { 00:10:15.280 "name": "BaseBdev2", 00:10:15.280 "uuid": "df239028-7e0f-5c58-8376-1d7a298ffa09", 00:10:15.280 "is_configured": true, 00:10:15.280 "data_offset": 2048, 00:10:15.280 "data_size": 63488 00:10:15.280 }, 00:10:15.280 { 00:10:15.280 "name": "BaseBdev3", 00:10:15.280 "uuid": "bd949c4d-dcc3-5319-8c25-965dc107b571", 00:10:15.280 "is_configured": true, 00:10:15.280 "data_offset": 2048, 00:10:15.280 "data_size": 63488 00:10:15.280 }, 00:10:15.280 { 00:10:15.280 "name": "BaseBdev4", 00:10:15.280 "uuid": "900d6dd0-4d62-577b-9f92-71e923fff6b8", 00:10:15.280 "is_configured": true, 00:10:15.280 "data_offset": 2048, 00:10:15.280 "data_size": 63488 00:10:15.280 } 00:10:15.280 ] 00:10:15.280 }' 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.280 18:50:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.862 18:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:15.862 18:50:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:15.862 [2024-11-16 18:50:59.107426] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.805 18:51:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.805 18:51:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.805 "name": "raid_bdev1", 00:10:16.805 "uuid": "7a87bfb1-e2b8-4aa9-863f-c4206ae40304", 00:10:16.805 "strip_size_kb": 64, 00:10:16.805 "state": "online", 00:10:16.805 "raid_level": "raid0", 00:10:16.805 "superblock": true, 00:10:16.805 "num_base_bdevs": 4, 00:10:16.805 "num_base_bdevs_discovered": 4, 00:10:16.805 "num_base_bdevs_operational": 4, 00:10:16.805 "base_bdevs_list": [ 00:10:16.805 { 00:10:16.805 "name": "BaseBdev1", 00:10:16.805 "uuid": "247a6359-89cc-5737-afc6-791ad665dde3", 00:10:16.805 "is_configured": true, 00:10:16.805 "data_offset": 2048, 00:10:16.805 "data_size": 63488 00:10:16.805 }, 00:10:16.805 { 00:10:16.805 "name": "BaseBdev2", 00:10:16.805 "uuid": "df239028-7e0f-5c58-8376-1d7a298ffa09", 00:10:16.805 "is_configured": true, 00:10:16.805 "data_offset": 2048, 00:10:16.805 "data_size": 63488 00:10:16.805 }, 00:10:16.805 { 00:10:16.805 "name": "BaseBdev3", 00:10:16.805 "uuid": "bd949c4d-dcc3-5319-8c25-965dc107b571", 00:10:16.805 "is_configured": true, 00:10:16.805 "data_offset": 2048, 00:10:16.805 "data_size": 63488 00:10:16.805 }, 00:10:16.805 { 00:10:16.805 "name": "BaseBdev4", 00:10:16.805 "uuid": "900d6dd0-4d62-577b-9f92-71e923fff6b8", 00:10:16.805 "is_configured": true, 00:10:16.805 "data_offset": 2048, 00:10:16.805 "data_size": 63488 00:10:16.805 } 00:10:16.805 ] 00:10:16.805 }' 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.805 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.065 [2024-11-16 18:51:00.470991] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.065 [2024-11-16 18:51:00.471026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.065 [2024-11-16 18:51:00.473869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.065 [2024-11-16 18:51:00.473960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.065 [2024-11-16 18:51:00.474041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.065 [2024-11-16 18:51:00.474094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:17.065 { 00:10:17.065 "results": [ 00:10:17.065 { 00:10:17.065 "job": "raid_bdev1", 00:10:17.065 "core_mask": "0x1", 00:10:17.065 "workload": "randrw", 00:10:17.065 "percentage": 50, 00:10:17.065 "status": "finished", 00:10:17.065 "queue_depth": 1, 00:10:17.065 "io_size": 131072, 00:10:17.065 "runtime": 1.364359, 00:10:17.065 "iops": 16250.121852093182, 00:10:17.065 "mibps": 2031.2652315116477, 00:10:17.065 "io_failed": 1, 00:10:17.065 "io_timeout": 0, 00:10:17.065 "avg_latency_us": 85.66970450160595, 00:10:17.065 "min_latency_us": 25.041048034934498, 00:10:17.065 "max_latency_us": 1430.9170305676855 00:10:17.065 } 00:10:17.065 ], 00:10:17.065 "core_count": 1 00:10:17.065 } 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70770 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70770 ']' 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70770 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70770 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70770' 00:10:17.065 killing process with pid 70770 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70770 00:10:17.065 [2024-11-16 18:51:00.522706] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.065 18:51:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70770 00:10:17.635 [2024-11-16 18:51:00.836196] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9NJxAvD97y 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:10:18.575 00:10:18.575 real 0m4.544s 00:10:18.575 user 0m5.323s 00:10:18.575 sys 0m0.557s 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:18.575 ************************************ 00:10:18.575 END TEST raid_read_error_test 00:10:18.575 ************************************ 00:10:18.575 18:51:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.575 18:51:02 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:10:18.575 18:51:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:18.575 18:51:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.575 18:51:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:18.575 ************************************ 00:10:18.575 START TEST raid_write_error_test 00:10:18.575 ************************************ 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:18.575 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DqVN1EBrUe 00:10:18.835 18:51:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70917 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70917 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70917 ']' 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.835 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.835 [2024-11-16 18:51:02.145366] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:18.835 [2024-11-16 18:51:02.145564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70917 ] 00:10:19.095 [2024-11-16 18:51:02.319978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.095 [2024-11-16 18:51:02.434824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.355 [2024-11-16 18:51:02.642552] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.355 [2024-11-16 18:51:02.642643] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:19.615 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.615 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:19.615 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.615 18:51:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:19.615 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.615 18:51:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.615 BaseBdev1_malloc 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.615 true 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.615 [2024-11-16 18:51:03.027151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:19.615 [2024-11-16 18:51:03.027207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.615 [2024-11-16 18:51:03.027227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:19.615 [2024-11-16 18:51:03.027238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.615 [2024-11-16 18:51:03.029288] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.615 [2024-11-16 18:51:03.029369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:19.615 BaseBdev1 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.615 BaseBdev2_malloc 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:19.615 18:51:03 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.615 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 true 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 [2024-11-16 18:51:03.092334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:19.875 [2024-11-16 18:51:03.092388] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.875 [2024-11-16 18:51:03.092420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:19.875 [2024-11-16 18:51:03.092430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.875 [2024-11-16 18:51:03.094444] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.875 [2024-11-16 18:51:03.094483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:19.875 BaseBdev2 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:19.875 BaseBdev3_malloc 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 true 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.875 [2024-11-16 18:51:03.171195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:19.875 [2024-11-16 18:51:03.171246] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.875 [2024-11-16 18:51:03.171277] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:19.875 [2024-11-16 18:51:03.171288] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.875 [2024-11-16 18:51:03.173322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.875 [2024-11-16 18:51:03.173363] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:19.875 BaseBdev3 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:19.875 18:51:03 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.876 BaseBdev4_malloc 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.876 true 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.876 [2024-11-16 18:51:03.237117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:19.876 [2024-11-16 18:51:03.237166] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.876 [2024-11-16 18:51:03.237183] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:19.876 [2024-11-16 18:51:03.237193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.876 [2024-11-16 18:51:03.239250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.876 [2024-11-16 18:51:03.239333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:19.876 BaseBdev4 
00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.876 [2024-11-16 18:51:03.249127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:19.876 [2024-11-16 18:51:03.250970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:19.876 [2024-11-16 18:51:03.251041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:19.876 [2024-11-16 18:51:03.251109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:19.876 [2024-11-16 18:51:03.251326] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:19.876 [2024-11-16 18:51:03.251344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:19.876 [2024-11-16 18:51:03.251583] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:19.876 [2024-11-16 18:51:03.251745] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:19.876 [2024-11-16 18:51:03.251756] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:19.876 [2024-11-16 18:51:03.251919] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.876 "name": "raid_bdev1", 00:10:19.876 "uuid": "a5c43856-33a9-45b7-80a3-dc41e601d8cf", 00:10:19.876 "strip_size_kb": 64, 00:10:19.876 "state": "online", 00:10:19.876 "raid_level": "raid0", 00:10:19.876 "superblock": true, 00:10:19.876 "num_base_bdevs": 4, 00:10:19.876 "num_base_bdevs_discovered": 4, 00:10:19.876 
"num_base_bdevs_operational": 4, 00:10:19.876 "base_bdevs_list": [ 00:10:19.876 { 00:10:19.876 "name": "BaseBdev1", 00:10:19.876 "uuid": "d6cef135-6c31-5461-9920-f2162888382a", 00:10:19.876 "is_configured": true, 00:10:19.876 "data_offset": 2048, 00:10:19.876 "data_size": 63488 00:10:19.876 }, 00:10:19.876 { 00:10:19.876 "name": "BaseBdev2", 00:10:19.876 "uuid": "0a7c1c75-14d4-5efa-8949-ff227846d0c3", 00:10:19.876 "is_configured": true, 00:10:19.876 "data_offset": 2048, 00:10:19.876 "data_size": 63488 00:10:19.876 }, 00:10:19.876 { 00:10:19.876 "name": "BaseBdev3", 00:10:19.876 "uuid": "7953e6fc-43d2-52f0-bc64-df82874e4fd6", 00:10:19.876 "is_configured": true, 00:10:19.876 "data_offset": 2048, 00:10:19.876 "data_size": 63488 00:10:19.876 }, 00:10:19.876 { 00:10:19.876 "name": "BaseBdev4", 00:10:19.876 "uuid": "0e9d93b6-7e17-518b-b0db-2311ff440324", 00:10:19.876 "is_configured": true, 00:10:19.876 "data_offset": 2048, 00:10:19.876 "data_size": 63488 00:10:19.876 } 00:10:19.876 ] 00:10:19.876 }' 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.876 18:51:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.445 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:20.445 18:51:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:20.445 [2024-11-16 18:51:03.757609] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.384 "name": "raid_bdev1", 00:10:21.384 "uuid": "a5c43856-33a9-45b7-80a3-dc41e601d8cf", 00:10:21.384 "strip_size_kb": 64, 00:10:21.384 "state": "online", 00:10:21.384 "raid_level": "raid0", 00:10:21.384 "superblock": true, 00:10:21.384 "num_base_bdevs": 4, 00:10:21.384 "num_base_bdevs_discovered": 4, 00:10:21.384 "num_base_bdevs_operational": 4, 00:10:21.384 "base_bdevs_list": [ 00:10:21.384 { 00:10:21.384 "name": "BaseBdev1", 00:10:21.384 "uuid": "d6cef135-6c31-5461-9920-f2162888382a", 00:10:21.384 "is_configured": true, 00:10:21.384 "data_offset": 2048, 00:10:21.384 "data_size": 63488 00:10:21.384 }, 00:10:21.384 { 00:10:21.384 "name": "BaseBdev2", 00:10:21.384 "uuid": "0a7c1c75-14d4-5efa-8949-ff227846d0c3", 00:10:21.384 "is_configured": true, 00:10:21.384 "data_offset": 2048, 00:10:21.384 "data_size": 63488 00:10:21.384 }, 00:10:21.384 { 00:10:21.384 "name": "BaseBdev3", 00:10:21.384 "uuid": "7953e6fc-43d2-52f0-bc64-df82874e4fd6", 00:10:21.384 "is_configured": true, 00:10:21.384 "data_offset": 2048, 00:10:21.384 "data_size": 63488 00:10:21.384 }, 00:10:21.384 { 00:10:21.384 "name": "BaseBdev4", 00:10:21.384 "uuid": "0e9d93b6-7e17-518b-b0db-2311ff440324", 00:10:21.384 "is_configured": true, 00:10:21.384 "data_offset": 2048, 00:10:21.384 "data_size": 63488 00:10:21.384 } 00:10:21.384 ] 00:10:21.384 }' 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.384 18:51:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.992 18:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.992 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.992 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:21.992 [2024-11-16 18:51:05.145396] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.992 [2024-11-16 18:51:05.145431] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.992 [2024-11-16 18:51:05.148228] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.992 [2024-11-16 18:51:05.148323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.992 [2024-11-16 18:51:05.148389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.992 [2024-11-16 18:51:05.148437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:21.992 { 00:10:21.992 "results": [ 00:10:21.992 { 00:10:21.992 "job": "raid_bdev1", 00:10:21.992 "core_mask": "0x1", 00:10:21.992 "workload": "randrw", 00:10:21.992 "percentage": 50, 00:10:21.992 "status": "finished", 00:10:21.992 "queue_depth": 1, 00:10:21.992 "io_size": 131072, 00:10:21.992 "runtime": 1.388654, 00:10:21.992 "iops": 16112.004862262305, 00:10:21.992 "mibps": 2014.0006077827882, 00:10:21.992 "io_failed": 1, 00:10:21.992 "io_timeout": 0, 00:10:21.992 "avg_latency_us": 86.39994191895781, 00:10:21.992 "min_latency_us": 25.4882096069869, 00:10:21.992 "max_latency_us": 1359.3711790393013 00:10:21.992 } 00:10:21.992 ], 00:10:21.992 "core_count": 1 00:10:21.992 } 00:10:21.992 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.992 18:51:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70917 00:10:21.992 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70917 ']' 00:10:21.992 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70917 00:10:21.992 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:10:21.993 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.993 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70917 00:10:21.993 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.993 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.993 killing process with pid 70917 00:10:21.993 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70917' 00:10:21.993 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70917 00:10:21.993 [2024-11-16 18:51:05.191266] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.993 18:51:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70917 00:10:22.256 [2024-11-16 18:51:05.510065] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.193 18:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DqVN1EBrUe 00:10:23.193 18:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:23.193 18:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:23.453 18:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:10:23.453 18:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:23.453 ************************************ 00:10:23.453 END TEST raid_write_error_test 00:10:23.453 ************************************ 00:10:23.453 18:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.453 18:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:23.453 18:51:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- 
# [[ 0.72 != \0\.\0\0 ]] 00:10:23.453 00:10:23.453 real 0m4.639s 00:10:23.453 user 0m5.453s 00:10:23.453 sys 0m0.593s 00:10:23.453 18:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.453 18:51:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.453 18:51:06 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:23.453 18:51:06 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:10:23.453 18:51:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.453 18:51:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.453 18:51:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.453 ************************************ 00:10:23.453 START TEST raid_state_function_test 00:10:23.453 ************************************ 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:23.453 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71061 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71061' 00:10:23.454 Process raid pid: 71061 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71061 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71061 ']' 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.454 18:51:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.454 [2024-11-16 18:51:06.842982] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:23.454 [2024-11-16 18:51:06.843091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.714 [2024-11-16 18:51:07.014973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.714 [2024-11-16 18:51:07.123192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.975 [2024-11-16 18:51:07.336131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.975 [2024-11-16 18:51:07.336224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.234 [2024-11-16 18:51:07.669274] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.234 [2024-11-16 18:51:07.669335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.234 [2024-11-16 18:51:07.669345] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.234 [2024-11-16 18:51:07.669355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.234 [2024-11-16 18:51:07.669365] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:24.234 [2024-11-16 18:51:07.669374] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.234 [2024-11-16 18:51:07.669380] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.234 [2024-11-16 18:51:07.669388] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.234 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.493 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.493 "name": "Existed_Raid", 00:10:24.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.493 "strip_size_kb": 64, 00:10:24.493 "state": "configuring", 00:10:24.493 "raid_level": "concat", 00:10:24.493 "superblock": false, 00:10:24.493 "num_base_bdevs": 4, 00:10:24.493 "num_base_bdevs_discovered": 0, 00:10:24.493 "num_base_bdevs_operational": 4, 00:10:24.493 "base_bdevs_list": [ 00:10:24.493 { 00:10:24.493 "name": "BaseBdev1", 00:10:24.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.493 "is_configured": false, 00:10:24.493 "data_offset": 0, 00:10:24.493 "data_size": 0 00:10:24.493 }, 00:10:24.493 { 00:10:24.493 "name": "BaseBdev2", 00:10:24.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.493 "is_configured": false, 00:10:24.493 "data_offset": 0, 00:10:24.493 "data_size": 0 00:10:24.493 }, 00:10:24.493 { 00:10:24.493 "name": "BaseBdev3", 00:10:24.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.493 "is_configured": false, 00:10:24.493 "data_offset": 0, 00:10:24.493 "data_size": 0 00:10:24.493 }, 00:10:24.493 { 00:10:24.493 "name": "BaseBdev4", 00:10:24.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.493 "is_configured": false, 00:10:24.493 "data_offset": 0, 00:10:24.493 "data_size": 0 00:10:24.493 } 00:10:24.493 ] 00:10:24.493 }' 00:10:24.493 18:51:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.493 18:51:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.753 [2024-11-16 18:51:08.148496] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:24.753 [2024-11-16 18:51:08.148604] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.753 [2024-11-16 18:51:08.160454] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.753 [2024-11-16 18:51:08.160559] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.753 [2024-11-16 18:51:08.160590] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.753 [2024-11-16 18:51:08.160614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:24.753 [2024-11-16 18:51:08.160633] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:24.753 [2024-11-16 18:51:08.160671] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:24.753 [2024-11-16 18:51:08.160691] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:24.753 [2024-11-16 18:51:08.160712] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.753 [2024-11-16 18:51:08.208605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.753 BaseBdev1 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.753 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.013 [ 00:10:25.013 { 00:10:25.013 "name": "BaseBdev1", 00:10:25.013 "aliases": [ 00:10:25.013 "cc165895-5032-48cb-800f-f4ba4f7f9c0d" 00:10:25.013 ], 00:10:25.013 "product_name": "Malloc disk", 00:10:25.013 "block_size": 512, 00:10:25.013 "num_blocks": 65536, 00:10:25.013 "uuid": "cc165895-5032-48cb-800f-f4ba4f7f9c0d", 00:10:25.013 "assigned_rate_limits": { 00:10:25.013 "rw_ios_per_sec": 0, 00:10:25.013 "rw_mbytes_per_sec": 0, 00:10:25.013 "r_mbytes_per_sec": 0, 00:10:25.013 "w_mbytes_per_sec": 0 00:10:25.013 }, 00:10:25.013 "claimed": true, 00:10:25.013 "claim_type": "exclusive_write", 00:10:25.013 "zoned": false, 00:10:25.013 "supported_io_types": { 00:10:25.013 "read": true, 00:10:25.013 "write": true, 00:10:25.013 "unmap": true, 00:10:25.013 "flush": true, 00:10:25.013 "reset": true, 00:10:25.013 "nvme_admin": false, 00:10:25.013 "nvme_io": false, 00:10:25.013 "nvme_io_md": false, 00:10:25.013 "write_zeroes": true, 00:10:25.013 "zcopy": true, 00:10:25.013 "get_zone_info": false, 00:10:25.013 "zone_management": false, 00:10:25.013 "zone_append": false, 00:10:25.013 "compare": false, 00:10:25.013 "compare_and_write": false, 00:10:25.013 "abort": true, 00:10:25.013 "seek_hole": false, 00:10:25.013 "seek_data": false, 00:10:25.013 "copy": true, 00:10:25.013 "nvme_iov_md": false 00:10:25.013 }, 00:10:25.013 "memory_domains": [ 00:10:25.013 { 00:10:25.013 "dma_device_id": "system", 00:10:25.013 "dma_device_type": 1 00:10:25.013 }, 00:10:25.013 { 00:10:25.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.013 "dma_device_type": 2 00:10:25.013 } 00:10:25.013 ], 00:10:25.013 "driver_specific": {} 00:10:25.013 } 00:10:25.013 ] 00:10:25.013 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:25.013 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:25.013 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.013 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.013 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.013 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.013 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.013 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.013 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.014 "name": "Existed_Raid", 
00:10:25.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.014 "strip_size_kb": 64, 00:10:25.014 "state": "configuring", 00:10:25.014 "raid_level": "concat", 00:10:25.014 "superblock": false, 00:10:25.014 "num_base_bdevs": 4, 00:10:25.014 "num_base_bdevs_discovered": 1, 00:10:25.014 "num_base_bdevs_operational": 4, 00:10:25.014 "base_bdevs_list": [ 00:10:25.014 { 00:10:25.014 "name": "BaseBdev1", 00:10:25.014 "uuid": "cc165895-5032-48cb-800f-f4ba4f7f9c0d", 00:10:25.014 "is_configured": true, 00:10:25.014 "data_offset": 0, 00:10:25.014 "data_size": 65536 00:10:25.014 }, 00:10:25.014 { 00:10:25.014 "name": "BaseBdev2", 00:10:25.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.014 "is_configured": false, 00:10:25.014 "data_offset": 0, 00:10:25.014 "data_size": 0 00:10:25.014 }, 00:10:25.014 { 00:10:25.014 "name": "BaseBdev3", 00:10:25.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.014 "is_configured": false, 00:10:25.014 "data_offset": 0, 00:10:25.014 "data_size": 0 00:10:25.014 }, 00:10:25.014 { 00:10:25.014 "name": "BaseBdev4", 00:10:25.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.014 "is_configured": false, 00:10:25.014 "data_offset": 0, 00:10:25.014 "data_size": 0 00:10:25.014 } 00:10:25.014 ] 00:10:25.014 }' 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.014 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.274 [2024-11-16 18:51:08.715779] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:25.274 [2024-11-16 18:51:08.715893] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.274 [2024-11-16 18:51:08.727822] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.274 [2024-11-16 18:51:08.729630] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.274 [2024-11-16 18:51:08.729686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.274 [2024-11-16 18:51:08.729697] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.274 [2024-11-16 18:51:08.729708] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.274 [2024-11-16 18:51:08.729715] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:25.274 [2024-11-16 18:51:08.729723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.274 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.533 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.533 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.533 "name": "Existed_Raid", 00:10:25.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.533 "strip_size_kb": 64, 00:10:25.533 "state": "configuring", 00:10:25.533 "raid_level": "concat", 00:10:25.533 "superblock": false, 00:10:25.533 "num_base_bdevs": 4, 00:10:25.533 
"num_base_bdevs_discovered": 1, 00:10:25.533 "num_base_bdevs_operational": 4, 00:10:25.533 "base_bdevs_list": [ 00:10:25.533 { 00:10:25.533 "name": "BaseBdev1", 00:10:25.533 "uuid": "cc165895-5032-48cb-800f-f4ba4f7f9c0d", 00:10:25.533 "is_configured": true, 00:10:25.533 "data_offset": 0, 00:10:25.533 "data_size": 65536 00:10:25.533 }, 00:10:25.533 { 00:10:25.533 "name": "BaseBdev2", 00:10:25.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.533 "is_configured": false, 00:10:25.533 "data_offset": 0, 00:10:25.533 "data_size": 0 00:10:25.533 }, 00:10:25.533 { 00:10:25.533 "name": "BaseBdev3", 00:10:25.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.533 "is_configured": false, 00:10:25.533 "data_offset": 0, 00:10:25.533 "data_size": 0 00:10:25.533 }, 00:10:25.533 { 00:10:25.533 "name": "BaseBdev4", 00:10:25.533 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.533 "is_configured": false, 00:10:25.533 "data_offset": 0, 00:10:25.533 "data_size": 0 00:10:25.533 } 00:10:25.533 ] 00:10:25.533 }' 00:10:25.533 18:51:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.533 18:51:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.792 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.792 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.792 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.792 [2024-11-16 18:51:09.210217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.792 BaseBdev2 00:10:25.792 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.792 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:25.792 18:51:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:25.792 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.793 [ 00:10:25.793 { 00:10:25.793 "name": "BaseBdev2", 00:10:25.793 "aliases": [ 00:10:25.793 "54526210-3f44-4088-b516-a59da1a6d4b5" 00:10:25.793 ], 00:10:25.793 "product_name": "Malloc disk", 00:10:25.793 "block_size": 512, 00:10:25.793 "num_blocks": 65536, 00:10:25.793 "uuid": "54526210-3f44-4088-b516-a59da1a6d4b5", 00:10:25.793 "assigned_rate_limits": { 00:10:25.793 "rw_ios_per_sec": 0, 00:10:25.793 "rw_mbytes_per_sec": 0, 00:10:25.793 "r_mbytes_per_sec": 0, 00:10:25.793 "w_mbytes_per_sec": 0 00:10:25.793 }, 00:10:25.793 "claimed": true, 00:10:25.793 "claim_type": "exclusive_write", 00:10:25.793 "zoned": false, 00:10:25.793 "supported_io_types": { 
00:10:25.793 "read": true, 00:10:25.793 "write": true, 00:10:25.793 "unmap": true, 00:10:25.793 "flush": true, 00:10:25.793 "reset": true, 00:10:25.793 "nvme_admin": false, 00:10:25.793 "nvme_io": false, 00:10:25.793 "nvme_io_md": false, 00:10:25.793 "write_zeroes": true, 00:10:25.793 "zcopy": true, 00:10:25.793 "get_zone_info": false, 00:10:25.793 "zone_management": false, 00:10:25.793 "zone_append": false, 00:10:25.793 "compare": false, 00:10:25.793 "compare_and_write": false, 00:10:25.793 "abort": true, 00:10:25.793 "seek_hole": false, 00:10:25.793 "seek_data": false, 00:10:25.793 "copy": true, 00:10:25.793 "nvme_iov_md": false 00:10:25.793 }, 00:10:25.793 "memory_domains": [ 00:10:25.793 { 00:10:25.793 "dma_device_id": "system", 00:10:25.793 "dma_device_type": 1 00:10:25.793 }, 00:10:25.793 { 00:10:25.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.793 "dma_device_type": 2 00:10:25.793 } 00:10:25.793 ], 00:10:25.793 "driver_specific": {} 00:10:25.793 } 00:10:25.793 ] 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.793 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.053 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.053 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.053 "name": "Existed_Raid", 00:10:26.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.053 "strip_size_kb": 64, 00:10:26.053 "state": "configuring", 00:10:26.053 "raid_level": "concat", 00:10:26.053 "superblock": false, 00:10:26.053 "num_base_bdevs": 4, 00:10:26.053 "num_base_bdevs_discovered": 2, 00:10:26.053 "num_base_bdevs_operational": 4, 00:10:26.053 "base_bdevs_list": [ 00:10:26.053 { 00:10:26.053 "name": "BaseBdev1", 00:10:26.053 "uuid": "cc165895-5032-48cb-800f-f4ba4f7f9c0d", 00:10:26.053 "is_configured": true, 00:10:26.053 "data_offset": 0, 00:10:26.053 "data_size": 65536 00:10:26.053 }, 00:10:26.053 { 00:10:26.053 "name": "BaseBdev2", 00:10:26.053 "uuid": "54526210-3f44-4088-b516-a59da1a6d4b5", 00:10:26.053 
"is_configured": true, 00:10:26.053 "data_offset": 0, 00:10:26.053 "data_size": 65536 00:10:26.053 }, 00:10:26.053 { 00:10:26.053 "name": "BaseBdev3", 00:10:26.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.053 "is_configured": false, 00:10:26.053 "data_offset": 0, 00:10:26.053 "data_size": 0 00:10:26.053 }, 00:10:26.053 { 00:10:26.053 "name": "BaseBdev4", 00:10:26.053 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.053 "is_configured": false, 00:10:26.053 "data_offset": 0, 00:10:26.053 "data_size": 0 00:10:26.053 } 00:10:26.053 ] 00:10:26.053 }' 00:10:26.053 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.053 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.312 [2024-11-16 18:51:09.761255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.312 BaseBdev3 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.312 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.571 [ 00:10:26.572 { 00:10:26.572 "name": "BaseBdev3", 00:10:26.572 "aliases": [ 00:10:26.572 "53977081-41f2-494f-ab6e-430fb6f68a9c" 00:10:26.572 ], 00:10:26.572 "product_name": "Malloc disk", 00:10:26.572 "block_size": 512, 00:10:26.572 "num_blocks": 65536, 00:10:26.572 "uuid": "53977081-41f2-494f-ab6e-430fb6f68a9c", 00:10:26.572 "assigned_rate_limits": { 00:10:26.572 "rw_ios_per_sec": 0, 00:10:26.572 "rw_mbytes_per_sec": 0, 00:10:26.572 "r_mbytes_per_sec": 0, 00:10:26.572 "w_mbytes_per_sec": 0 00:10:26.572 }, 00:10:26.572 "claimed": true, 00:10:26.572 "claim_type": "exclusive_write", 00:10:26.572 "zoned": false, 00:10:26.572 "supported_io_types": { 00:10:26.572 "read": true, 00:10:26.572 "write": true, 00:10:26.572 "unmap": true, 00:10:26.572 "flush": true, 00:10:26.572 "reset": true, 00:10:26.572 "nvme_admin": false, 00:10:26.572 "nvme_io": false, 00:10:26.572 "nvme_io_md": false, 00:10:26.572 "write_zeroes": true, 00:10:26.572 "zcopy": true, 00:10:26.572 "get_zone_info": false, 00:10:26.572 "zone_management": false, 00:10:26.572 "zone_append": false, 00:10:26.572 "compare": false, 00:10:26.572 "compare_and_write": false, 
00:10:26.572 "abort": true, 00:10:26.572 "seek_hole": false, 00:10:26.572 "seek_data": false, 00:10:26.572 "copy": true, 00:10:26.572 "nvme_iov_md": false 00:10:26.572 }, 00:10:26.572 "memory_domains": [ 00:10:26.572 { 00:10:26.572 "dma_device_id": "system", 00:10:26.572 "dma_device_type": 1 00:10:26.572 }, 00:10:26.572 { 00:10:26.572 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.572 "dma_device_type": 2 00:10:26.572 } 00:10:26.572 ], 00:10:26.572 "driver_specific": {} 00:10:26.572 } 00:10:26.572 ] 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.572 "name": "Existed_Raid", 00:10:26.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.572 "strip_size_kb": 64, 00:10:26.572 "state": "configuring", 00:10:26.572 "raid_level": "concat", 00:10:26.572 "superblock": false, 00:10:26.572 "num_base_bdevs": 4, 00:10:26.572 "num_base_bdevs_discovered": 3, 00:10:26.572 "num_base_bdevs_operational": 4, 00:10:26.572 "base_bdevs_list": [ 00:10:26.572 { 00:10:26.572 "name": "BaseBdev1", 00:10:26.572 "uuid": "cc165895-5032-48cb-800f-f4ba4f7f9c0d", 00:10:26.572 "is_configured": true, 00:10:26.572 "data_offset": 0, 00:10:26.572 "data_size": 65536 00:10:26.572 }, 00:10:26.572 { 00:10:26.572 "name": "BaseBdev2", 00:10:26.572 "uuid": "54526210-3f44-4088-b516-a59da1a6d4b5", 00:10:26.572 "is_configured": true, 00:10:26.572 "data_offset": 0, 00:10:26.572 "data_size": 65536 00:10:26.572 }, 00:10:26.572 { 00:10:26.572 "name": "BaseBdev3", 00:10:26.572 "uuid": "53977081-41f2-494f-ab6e-430fb6f68a9c", 00:10:26.572 "is_configured": true, 00:10:26.572 "data_offset": 0, 00:10:26.572 "data_size": 65536 00:10:26.572 }, 00:10:26.572 { 00:10:26.572 "name": "BaseBdev4", 00:10:26.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.572 "is_configured": false, 
00:10:26.572 "data_offset": 0, 00:10:26.572 "data_size": 0 00:10:26.572 } 00:10:26.572 ] 00:10:26.572 }' 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.572 18:51:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.832 [2024-11-16 18:51:10.265506] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:26.832 [2024-11-16 18:51:10.265553] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:26.832 [2024-11-16 18:51:10.265561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:26.832 [2024-11-16 18:51:10.265863] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:26.832 [2024-11-16 18:51:10.266023] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:26.832 [2024-11-16 18:51:10.266036] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:26.832 [2024-11-16 18:51:10.266308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.832 BaseBdev4 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.832 [ 00:10:26.832 { 00:10:26.832 "name": "BaseBdev4", 00:10:26.832 "aliases": [ 00:10:26.832 "33f05999-7aa7-4c83-8c5b-21945af1d1be" 00:10:26.832 ], 00:10:26.832 "product_name": "Malloc disk", 00:10:26.832 "block_size": 512, 00:10:26.832 "num_blocks": 65536, 00:10:26.832 "uuid": "33f05999-7aa7-4c83-8c5b-21945af1d1be", 00:10:26.832 "assigned_rate_limits": { 00:10:26.832 "rw_ios_per_sec": 0, 00:10:26.832 "rw_mbytes_per_sec": 0, 00:10:26.832 "r_mbytes_per_sec": 0, 00:10:26.832 "w_mbytes_per_sec": 0 00:10:26.832 }, 00:10:26.832 "claimed": true, 00:10:26.832 "claim_type": "exclusive_write", 00:10:26.832 "zoned": false, 00:10:26.832 "supported_io_types": { 00:10:26.832 "read": true, 00:10:26.832 "write": true, 00:10:26.832 "unmap": true, 00:10:26.832 "flush": true, 00:10:26.832 "reset": true, 00:10:26.832 
"nvme_admin": false, 00:10:26.832 "nvme_io": false, 00:10:26.832 "nvme_io_md": false, 00:10:26.832 "write_zeroes": true, 00:10:26.832 "zcopy": true, 00:10:26.832 "get_zone_info": false, 00:10:26.832 "zone_management": false, 00:10:26.832 "zone_append": false, 00:10:26.832 "compare": false, 00:10:26.832 "compare_and_write": false, 00:10:26.832 "abort": true, 00:10:26.832 "seek_hole": false, 00:10:26.832 "seek_data": false, 00:10:26.832 "copy": true, 00:10:26.832 "nvme_iov_md": false 00:10:26.832 }, 00:10:26.832 "memory_domains": [ 00:10:26.832 { 00:10:26.832 "dma_device_id": "system", 00:10:26.832 "dma_device_type": 1 00:10:26.832 }, 00:10:26.832 { 00:10:26.832 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:26.832 "dma_device_type": 2 00:10:26.832 } 00:10:26.832 ], 00:10:26.832 "driver_specific": {} 00:10:26.832 } 00:10:26.832 ] 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.832 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:27.093 
18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.093 "name": "Existed_Raid", 00:10:27.093 "uuid": "65ba7af6-5beb-4eb4-b2eb-b8b5f0dcf232", 00:10:27.093 "strip_size_kb": 64, 00:10:27.093 "state": "online", 00:10:27.093 "raid_level": "concat", 00:10:27.093 "superblock": false, 00:10:27.093 "num_base_bdevs": 4, 00:10:27.093 "num_base_bdevs_discovered": 4, 00:10:27.093 "num_base_bdevs_operational": 4, 00:10:27.093 "base_bdevs_list": [ 00:10:27.093 { 00:10:27.093 "name": "BaseBdev1", 00:10:27.093 "uuid": "cc165895-5032-48cb-800f-f4ba4f7f9c0d", 00:10:27.093 "is_configured": true, 00:10:27.093 "data_offset": 0, 00:10:27.093 "data_size": 65536 00:10:27.093 }, 00:10:27.093 { 00:10:27.093 "name": "BaseBdev2", 00:10:27.093 "uuid": "54526210-3f44-4088-b516-a59da1a6d4b5", 00:10:27.093 "is_configured": true, 00:10:27.093 "data_offset": 0, 00:10:27.093 "data_size": 65536 00:10:27.093 }, 00:10:27.093 { 00:10:27.093 "name": "BaseBdev3", 
00:10:27.093 "uuid": "53977081-41f2-494f-ab6e-430fb6f68a9c", 00:10:27.093 "is_configured": true, 00:10:27.093 "data_offset": 0, 00:10:27.093 "data_size": 65536 00:10:27.093 }, 00:10:27.093 { 00:10:27.093 "name": "BaseBdev4", 00:10:27.093 "uuid": "33f05999-7aa7-4c83-8c5b-21945af1d1be", 00:10:27.093 "is_configured": true, 00:10:27.093 "data_offset": 0, 00:10:27.093 "data_size": 65536 00:10:27.093 } 00:10:27.093 ] 00:10:27.093 }' 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.093 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.354 [2024-11-16 18:51:10.741099] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.354 
18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:27.354 "name": "Existed_Raid", 00:10:27.354 "aliases": [ 00:10:27.354 "65ba7af6-5beb-4eb4-b2eb-b8b5f0dcf232" 00:10:27.354 ], 00:10:27.354 "product_name": "Raid Volume", 00:10:27.354 "block_size": 512, 00:10:27.354 "num_blocks": 262144, 00:10:27.354 "uuid": "65ba7af6-5beb-4eb4-b2eb-b8b5f0dcf232", 00:10:27.354 "assigned_rate_limits": { 00:10:27.354 "rw_ios_per_sec": 0, 00:10:27.354 "rw_mbytes_per_sec": 0, 00:10:27.354 "r_mbytes_per_sec": 0, 00:10:27.354 "w_mbytes_per_sec": 0 00:10:27.354 }, 00:10:27.354 "claimed": false, 00:10:27.354 "zoned": false, 00:10:27.354 "supported_io_types": { 00:10:27.354 "read": true, 00:10:27.354 "write": true, 00:10:27.354 "unmap": true, 00:10:27.354 "flush": true, 00:10:27.354 "reset": true, 00:10:27.354 "nvme_admin": false, 00:10:27.354 "nvme_io": false, 00:10:27.354 "nvme_io_md": false, 00:10:27.354 "write_zeroes": true, 00:10:27.354 "zcopy": false, 00:10:27.354 "get_zone_info": false, 00:10:27.354 "zone_management": false, 00:10:27.354 "zone_append": false, 00:10:27.354 "compare": false, 00:10:27.354 "compare_and_write": false, 00:10:27.354 "abort": false, 00:10:27.354 "seek_hole": false, 00:10:27.354 "seek_data": false, 00:10:27.354 "copy": false, 00:10:27.354 "nvme_iov_md": false 00:10:27.354 }, 00:10:27.354 "memory_domains": [ 00:10:27.354 { 00:10:27.354 "dma_device_id": "system", 00:10:27.354 "dma_device_type": 1 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.354 "dma_device_type": 2 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "dma_device_id": "system", 00:10:27.354 "dma_device_type": 1 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.354 "dma_device_type": 2 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "dma_device_id": "system", 00:10:27.354 "dma_device_type": 1 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:27.354 "dma_device_type": 2 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "dma_device_id": "system", 00:10:27.354 "dma_device_type": 1 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.354 "dma_device_type": 2 00:10:27.354 } 00:10:27.354 ], 00:10:27.354 "driver_specific": { 00:10:27.354 "raid": { 00:10:27.354 "uuid": "65ba7af6-5beb-4eb4-b2eb-b8b5f0dcf232", 00:10:27.354 "strip_size_kb": 64, 00:10:27.354 "state": "online", 00:10:27.354 "raid_level": "concat", 00:10:27.354 "superblock": false, 00:10:27.354 "num_base_bdevs": 4, 00:10:27.354 "num_base_bdevs_discovered": 4, 00:10:27.354 "num_base_bdevs_operational": 4, 00:10:27.354 "base_bdevs_list": [ 00:10:27.354 { 00:10:27.354 "name": "BaseBdev1", 00:10:27.354 "uuid": "cc165895-5032-48cb-800f-f4ba4f7f9c0d", 00:10:27.354 "is_configured": true, 00:10:27.354 "data_offset": 0, 00:10:27.354 "data_size": 65536 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "name": "BaseBdev2", 00:10:27.354 "uuid": "54526210-3f44-4088-b516-a59da1a6d4b5", 00:10:27.354 "is_configured": true, 00:10:27.354 "data_offset": 0, 00:10:27.354 "data_size": 65536 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "name": "BaseBdev3", 00:10:27.354 "uuid": "53977081-41f2-494f-ab6e-430fb6f68a9c", 00:10:27.354 "is_configured": true, 00:10:27.354 "data_offset": 0, 00:10:27.354 "data_size": 65536 00:10:27.354 }, 00:10:27.354 { 00:10:27.354 "name": "BaseBdev4", 00:10:27.354 "uuid": "33f05999-7aa7-4c83-8c5b-21945af1d1be", 00:10:27.354 "is_configured": true, 00:10:27.354 "data_offset": 0, 00:10:27.354 "data_size": 65536 00:10:27.354 } 00:10:27.354 ] 00:10:27.354 } 00:10:27.354 } 00:10:27.354 }' 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:27.354 BaseBdev2 
00:10:27.354 BaseBdev3 00:10:27.354 BaseBdev4' 00:10:27.354 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.614 18:51:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.614 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.615 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.615 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:27.615 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:27.615 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.615 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.615 18:51:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:27.615 18:51:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.615 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:27.615 18:51:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:27.615 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.615 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.615 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.615 [2024-11-16 18:51:11.032306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.615 [2024-11-16 18:51:11.032335] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.615 [2024-11-16 18:51:11.032383] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.875 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.875 "name": "Existed_Raid", 00:10:27.875 "uuid": "65ba7af6-5beb-4eb4-b2eb-b8b5f0dcf232", 00:10:27.875 "strip_size_kb": 64, 00:10:27.875 "state": "offline", 00:10:27.875 "raid_level": "concat", 00:10:27.875 "superblock": false, 00:10:27.875 "num_base_bdevs": 4, 00:10:27.875 "num_base_bdevs_discovered": 3, 00:10:27.875 "num_base_bdevs_operational": 3, 00:10:27.875 "base_bdevs_list": [ 00:10:27.875 { 00:10:27.875 "name": null, 00:10:27.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.875 "is_configured": false, 00:10:27.875 "data_offset": 0, 00:10:27.875 "data_size": 65536 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "name": "BaseBdev2", 00:10:27.875 "uuid": "54526210-3f44-4088-b516-a59da1a6d4b5", 00:10:27.875 "is_configured": 
true, 00:10:27.875 "data_offset": 0, 00:10:27.875 "data_size": 65536 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "name": "BaseBdev3", 00:10:27.875 "uuid": "53977081-41f2-494f-ab6e-430fb6f68a9c", 00:10:27.875 "is_configured": true, 00:10:27.875 "data_offset": 0, 00:10:27.875 "data_size": 65536 00:10:27.875 }, 00:10:27.875 { 00:10:27.875 "name": "BaseBdev4", 00:10:27.875 "uuid": "33f05999-7aa7-4c83-8c5b-21945af1d1be", 00:10:27.875 "is_configured": true, 00:10:27.876 "data_offset": 0, 00:10:27.876 "data_size": 65536 00:10:27.876 } 00:10:27.876 ] 00:10:27.876 }' 00:10:27.876 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.876 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:28.144 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.144 [2024-11-16 18:51:11.567388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.421 [2024-11-16 18:51:11.719030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.421 18:51:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.421 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.421 [2024-11-16 18:51:11.871881] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:28.421 [2024-11-16 18:51:11.871929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:28.681 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.681 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:28.681 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:28.682 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.682 18:51:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:28.682 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.682 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.682 18:51:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.682 BaseBdev2 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.682 [ 00:10:28.682 { 00:10:28.682 "name": "BaseBdev2", 00:10:28.682 "aliases": [ 00:10:28.682 "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07" 00:10:28.682 ], 00:10:28.682 "product_name": "Malloc disk", 00:10:28.682 "block_size": 512, 00:10:28.682 "num_blocks": 65536, 00:10:28.682 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:28.682 "assigned_rate_limits": { 00:10:28.682 "rw_ios_per_sec": 0, 00:10:28.682 "rw_mbytes_per_sec": 0, 00:10:28.682 "r_mbytes_per_sec": 0, 00:10:28.682 "w_mbytes_per_sec": 0 00:10:28.682 }, 00:10:28.682 "claimed": false, 00:10:28.682 "zoned": false, 00:10:28.682 "supported_io_types": { 00:10:28.682 "read": true, 00:10:28.682 "write": true, 00:10:28.682 "unmap": true, 00:10:28.682 "flush": true, 00:10:28.682 "reset": true, 00:10:28.682 "nvme_admin": false, 00:10:28.682 "nvme_io": false, 00:10:28.682 "nvme_io_md": false, 00:10:28.682 "write_zeroes": true, 00:10:28.682 "zcopy": true, 00:10:28.682 "get_zone_info": false, 00:10:28.682 "zone_management": false, 00:10:28.682 "zone_append": false, 00:10:28.682 "compare": false, 00:10:28.682 "compare_and_write": false, 00:10:28.682 "abort": true, 00:10:28.682 "seek_hole": false, 00:10:28.682 
"seek_data": false, 00:10:28.682 "copy": true, 00:10:28.682 "nvme_iov_md": false 00:10:28.682 }, 00:10:28.682 "memory_domains": [ 00:10:28.682 { 00:10:28.682 "dma_device_id": "system", 00:10:28.682 "dma_device_type": 1 00:10:28.682 }, 00:10:28.682 { 00:10:28.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.682 "dma_device_type": 2 00:10:28.682 } 00:10:28.682 ], 00:10:28.682 "driver_specific": {} 00:10:28.682 } 00:10:28.682 ] 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.682 BaseBdev3 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.682 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.942 [ 00:10:28.942 { 00:10:28.942 "name": "BaseBdev3", 00:10:28.942 "aliases": [ 00:10:28.942 "f24208c0-0023-422a-bc3b-c17e65c7655d" 00:10:28.942 ], 00:10:28.942 "product_name": "Malloc disk", 00:10:28.942 "block_size": 512, 00:10:28.942 "num_blocks": 65536, 00:10:28.942 "uuid": "f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:28.942 "assigned_rate_limits": { 00:10:28.942 "rw_ios_per_sec": 0, 00:10:28.942 "rw_mbytes_per_sec": 0, 00:10:28.942 "r_mbytes_per_sec": 0, 00:10:28.942 "w_mbytes_per_sec": 0 00:10:28.942 }, 00:10:28.942 "claimed": false, 00:10:28.942 "zoned": false, 00:10:28.942 "supported_io_types": { 00:10:28.942 "read": true, 00:10:28.942 "write": true, 00:10:28.942 "unmap": true, 00:10:28.942 "flush": true, 00:10:28.942 "reset": true, 00:10:28.942 "nvme_admin": false, 00:10:28.942 "nvme_io": false, 00:10:28.942 "nvme_io_md": false, 00:10:28.943 "write_zeroes": true, 00:10:28.943 "zcopy": true, 00:10:28.943 "get_zone_info": false, 00:10:28.943 "zone_management": false, 00:10:28.943 "zone_append": false, 00:10:28.943 "compare": false, 00:10:28.943 "compare_and_write": false, 00:10:28.943 "abort": true, 00:10:28.943 "seek_hole": false, 00:10:28.943 "seek_data": false, 
00:10:28.943 "copy": true, 00:10:28.943 "nvme_iov_md": false 00:10:28.943 }, 00:10:28.943 "memory_domains": [ 00:10:28.943 { 00:10:28.943 "dma_device_id": "system", 00:10:28.943 "dma_device_type": 1 00:10:28.943 }, 00:10:28.943 { 00:10:28.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.943 "dma_device_type": 2 00:10:28.943 } 00:10:28.943 ], 00:10:28.943 "driver_specific": {} 00:10:28.943 } 00:10:28.943 ] 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.943 BaseBdev4 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:28.943 
18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.943 [ 00:10:28.943 { 00:10:28.943 "name": "BaseBdev4", 00:10:28.943 "aliases": [ 00:10:28.943 "59d2f391-428f-40e5-aebc-da1cb12f3840" 00:10:28.943 ], 00:10:28.943 "product_name": "Malloc disk", 00:10:28.943 "block_size": 512, 00:10:28.943 "num_blocks": 65536, 00:10:28.943 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:28.943 "assigned_rate_limits": { 00:10:28.943 "rw_ios_per_sec": 0, 00:10:28.943 "rw_mbytes_per_sec": 0, 00:10:28.943 "r_mbytes_per_sec": 0, 00:10:28.943 "w_mbytes_per_sec": 0 00:10:28.943 }, 00:10:28.943 "claimed": false, 00:10:28.943 "zoned": false, 00:10:28.943 "supported_io_types": { 00:10:28.943 "read": true, 00:10:28.943 "write": true, 00:10:28.943 "unmap": true, 00:10:28.943 "flush": true, 00:10:28.943 "reset": true, 00:10:28.943 "nvme_admin": false, 00:10:28.943 "nvme_io": false, 00:10:28.943 "nvme_io_md": false, 00:10:28.943 "write_zeroes": true, 00:10:28.943 "zcopy": true, 00:10:28.943 "get_zone_info": false, 00:10:28.943 "zone_management": false, 00:10:28.943 "zone_append": false, 00:10:28.943 "compare": false, 00:10:28.943 "compare_and_write": false, 00:10:28.943 "abort": true, 00:10:28.943 "seek_hole": false, 00:10:28.943 "seek_data": false, 00:10:28.943 
"copy": true, 00:10:28.943 "nvme_iov_md": false 00:10:28.943 }, 00:10:28.943 "memory_domains": [ 00:10:28.943 { 00:10:28.943 "dma_device_id": "system", 00:10:28.943 "dma_device_type": 1 00:10:28.943 }, 00:10:28.943 { 00:10:28.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.943 "dma_device_type": 2 00:10:28.943 } 00:10:28.943 ], 00:10:28.943 "driver_specific": {} 00:10:28.943 } 00:10:28.943 ] 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.943 [2024-11-16 18:51:12.256256] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:28.943 [2024-11-16 18:51:12.256346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:28.943 [2024-11-16 18:51:12.256409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:28.943 [2024-11-16 18:51:12.258240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:28.943 [2024-11-16 18:51:12.258349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.943 18:51:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.943 "name": "Existed_Raid", 00:10:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.943 "strip_size_kb": 64, 00:10:28.943 "state": "configuring", 00:10:28.943 
"raid_level": "concat", 00:10:28.943 "superblock": false, 00:10:28.943 "num_base_bdevs": 4, 00:10:28.943 "num_base_bdevs_discovered": 3, 00:10:28.943 "num_base_bdevs_operational": 4, 00:10:28.943 "base_bdevs_list": [ 00:10:28.943 { 00:10:28.943 "name": "BaseBdev1", 00:10:28.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.943 "is_configured": false, 00:10:28.943 "data_offset": 0, 00:10:28.943 "data_size": 0 00:10:28.943 }, 00:10:28.943 { 00:10:28.943 "name": "BaseBdev2", 00:10:28.943 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:28.943 "is_configured": true, 00:10:28.943 "data_offset": 0, 00:10:28.943 "data_size": 65536 00:10:28.943 }, 00:10:28.943 { 00:10:28.943 "name": "BaseBdev3", 00:10:28.943 "uuid": "f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:28.943 "is_configured": true, 00:10:28.943 "data_offset": 0, 00:10:28.943 "data_size": 65536 00:10:28.943 }, 00:10:28.943 { 00:10:28.943 "name": "BaseBdev4", 00:10:28.943 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:28.943 "is_configured": true, 00:10:28.943 "data_offset": 0, 00:10:28.943 "data_size": 65536 00:10:28.943 } 00:10:28.943 ] 00:10:28.943 }' 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.943 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.512 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:29.512 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.512 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.512 [2024-11-16 18:51:12.723526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:29.512 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.512 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.513 "name": "Existed_Raid", 00:10:29.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.513 "strip_size_kb": 64, 00:10:29.513 "state": "configuring", 00:10:29.513 "raid_level": "concat", 00:10:29.513 "superblock": false, 
00:10:29.513 "num_base_bdevs": 4, 00:10:29.513 "num_base_bdevs_discovered": 2, 00:10:29.513 "num_base_bdevs_operational": 4, 00:10:29.513 "base_bdevs_list": [ 00:10:29.513 { 00:10:29.513 "name": "BaseBdev1", 00:10:29.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.513 "is_configured": false, 00:10:29.513 "data_offset": 0, 00:10:29.513 "data_size": 0 00:10:29.513 }, 00:10:29.513 { 00:10:29.513 "name": null, 00:10:29.513 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:29.513 "is_configured": false, 00:10:29.513 "data_offset": 0, 00:10:29.513 "data_size": 65536 00:10:29.513 }, 00:10:29.513 { 00:10:29.513 "name": "BaseBdev3", 00:10:29.513 "uuid": "f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:29.513 "is_configured": true, 00:10:29.513 "data_offset": 0, 00:10:29.513 "data_size": 65536 00:10:29.513 }, 00:10:29.513 { 00:10:29.513 "name": "BaseBdev4", 00:10:29.513 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:29.513 "is_configured": true, 00:10:29.513 "data_offset": 0, 00:10:29.513 "data_size": 65536 00:10:29.513 } 00:10:29.513 ] 00:10:29.513 }' 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.513 18:51:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:29.773 18:51:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.773 [2024-11-16 18:51:13.239035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.773 BaseBdev1 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.773 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:30.033 [ 00:10:30.033 { 00:10:30.033 "name": "BaseBdev1", 00:10:30.033 "aliases": [ 00:10:30.033 "0306b86c-0416-4667-8944-8a6438db61d4" 00:10:30.033 ], 00:10:30.033 "product_name": "Malloc disk", 00:10:30.033 "block_size": 512, 00:10:30.033 "num_blocks": 65536, 00:10:30.033 "uuid": "0306b86c-0416-4667-8944-8a6438db61d4", 00:10:30.033 "assigned_rate_limits": { 00:10:30.033 "rw_ios_per_sec": 0, 00:10:30.033 "rw_mbytes_per_sec": 0, 00:10:30.033 "r_mbytes_per_sec": 0, 00:10:30.033 "w_mbytes_per_sec": 0 00:10:30.033 }, 00:10:30.033 "claimed": true, 00:10:30.033 "claim_type": "exclusive_write", 00:10:30.033 "zoned": false, 00:10:30.033 "supported_io_types": { 00:10:30.033 "read": true, 00:10:30.033 "write": true, 00:10:30.033 "unmap": true, 00:10:30.033 "flush": true, 00:10:30.033 "reset": true, 00:10:30.033 "nvme_admin": false, 00:10:30.033 "nvme_io": false, 00:10:30.033 "nvme_io_md": false, 00:10:30.033 "write_zeroes": true, 00:10:30.033 "zcopy": true, 00:10:30.033 "get_zone_info": false, 00:10:30.033 "zone_management": false, 00:10:30.033 "zone_append": false, 00:10:30.033 "compare": false, 00:10:30.033 "compare_and_write": false, 00:10:30.033 "abort": true, 00:10:30.033 "seek_hole": false, 00:10:30.033 "seek_data": false, 00:10:30.033 "copy": true, 00:10:30.033 "nvme_iov_md": false 00:10:30.033 }, 00:10:30.033 "memory_domains": [ 00:10:30.033 { 00:10:30.033 "dma_device_id": "system", 00:10:30.033 "dma_device_type": 1 00:10:30.033 }, 00:10:30.033 { 00:10:30.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.033 "dma_device_type": 2 00:10:30.033 } 00:10:30.033 ], 00:10:30.033 "driver_specific": {} 00:10:30.033 } 00:10:30.033 ] 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.033 "name": "Existed_Raid", 00:10:30.033 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.033 "strip_size_kb": 64, 00:10:30.033 "state": "configuring", 00:10:30.033 "raid_level": "concat", 00:10:30.033 "superblock": false, 
00:10:30.033 "num_base_bdevs": 4, 00:10:30.033 "num_base_bdevs_discovered": 3, 00:10:30.033 "num_base_bdevs_operational": 4, 00:10:30.033 "base_bdevs_list": [ 00:10:30.033 { 00:10:30.033 "name": "BaseBdev1", 00:10:30.033 "uuid": "0306b86c-0416-4667-8944-8a6438db61d4", 00:10:30.033 "is_configured": true, 00:10:30.033 "data_offset": 0, 00:10:30.033 "data_size": 65536 00:10:30.033 }, 00:10:30.033 { 00:10:30.033 "name": null, 00:10:30.033 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:30.033 "is_configured": false, 00:10:30.033 "data_offset": 0, 00:10:30.033 "data_size": 65536 00:10:30.033 }, 00:10:30.033 { 00:10:30.033 "name": "BaseBdev3", 00:10:30.033 "uuid": "f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:30.033 "is_configured": true, 00:10:30.033 "data_offset": 0, 00:10:30.033 "data_size": 65536 00:10:30.033 }, 00:10:30.033 { 00:10:30.033 "name": "BaseBdev4", 00:10:30.033 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:30.033 "is_configured": true, 00:10:30.033 "data_offset": 0, 00:10:30.033 "data_size": 65536 00:10:30.033 } 00:10:30.033 ] 00:10:30.033 }' 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.033 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:30.293 18:51:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.293 [2024-11-16 18:51:13.742263] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.293 18:51:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.293 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.553 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.553 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.553 "name": "Existed_Raid", 00:10:30.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.553 "strip_size_kb": 64, 00:10:30.553 "state": "configuring", 00:10:30.553 "raid_level": "concat", 00:10:30.553 "superblock": false, 00:10:30.553 "num_base_bdevs": 4, 00:10:30.553 "num_base_bdevs_discovered": 2, 00:10:30.553 "num_base_bdevs_operational": 4, 00:10:30.553 "base_bdevs_list": [ 00:10:30.553 { 00:10:30.553 "name": "BaseBdev1", 00:10:30.553 "uuid": "0306b86c-0416-4667-8944-8a6438db61d4", 00:10:30.553 "is_configured": true, 00:10:30.553 "data_offset": 0, 00:10:30.553 "data_size": 65536 00:10:30.553 }, 00:10:30.553 { 00:10:30.553 "name": null, 00:10:30.553 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:30.553 "is_configured": false, 00:10:30.553 "data_offset": 0, 00:10:30.553 "data_size": 65536 00:10:30.553 }, 00:10:30.553 { 00:10:30.553 "name": null, 00:10:30.553 "uuid": "f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:30.553 "is_configured": false, 00:10:30.553 "data_offset": 0, 00:10:30.553 "data_size": 65536 00:10:30.553 }, 00:10:30.553 { 00:10:30.553 "name": "BaseBdev4", 00:10:30.553 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:30.553 "is_configured": true, 00:10:30.553 "data_offset": 0, 00:10:30.553 "data_size": 65536 00:10:30.553 } 00:10:30.553 ] 00:10:30.553 }' 00:10:30.553 18:51:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.553 18:51:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.813 [2024-11-16 18:51:14.193465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.813 "name": "Existed_Raid", 00:10:30.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.813 "strip_size_kb": 64, 00:10:30.813 "state": "configuring", 00:10:30.813 "raid_level": "concat", 00:10:30.813 "superblock": false, 00:10:30.813 "num_base_bdevs": 4, 00:10:30.813 "num_base_bdevs_discovered": 3, 00:10:30.813 "num_base_bdevs_operational": 4, 00:10:30.813 "base_bdevs_list": [ 00:10:30.813 { 00:10:30.813 "name": "BaseBdev1", 00:10:30.813 "uuid": "0306b86c-0416-4667-8944-8a6438db61d4", 00:10:30.813 "is_configured": true, 00:10:30.813 "data_offset": 0, 00:10:30.813 "data_size": 65536 00:10:30.813 }, 00:10:30.813 { 00:10:30.813 "name": null, 00:10:30.813 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:30.813 "is_configured": false, 00:10:30.813 "data_offset": 0, 00:10:30.813 "data_size": 65536 00:10:30.813 }, 00:10:30.813 { 00:10:30.813 "name": "BaseBdev3", 00:10:30.813 "uuid": 
"f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:30.813 "is_configured": true, 00:10:30.813 "data_offset": 0, 00:10:30.813 "data_size": 65536 00:10:30.813 }, 00:10:30.813 { 00:10:30.813 "name": "BaseBdev4", 00:10:30.813 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:30.813 "is_configured": true, 00:10:30.813 "data_offset": 0, 00:10:30.813 "data_size": 65536 00:10:30.813 } 00:10:30.813 ] 00:10:30.813 }' 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.813 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.383 [2024-11-16 18:51:14.620775] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.383 "name": "Existed_Raid", 00:10:31.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.383 "strip_size_kb": 64, 00:10:31.383 "state": "configuring", 00:10:31.383 "raid_level": "concat", 00:10:31.383 "superblock": false, 00:10:31.383 "num_base_bdevs": 4, 00:10:31.383 
"num_base_bdevs_discovered": 2, 00:10:31.383 "num_base_bdevs_operational": 4, 00:10:31.383 "base_bdevs_list": [ 00:10:31.383 { 00:10:31.383 "name": null, 00:10:31.383 "uuid": "0306b86c-0416-4667-8944-8a6438db61d4", 00:10:31.383 "is_configured": false, 00:10:31.383 "data_offset": 0, 00:10:31.383 "data_size": 65536 00:10:31.383 }, 00:10:31.383 { 00:10:31.383 "name": null, 00:10:31.383 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:31.383 "is_configured": false, 00:10:31.383 "data_offset": 0, 00:10:31.383 "data_size": 65536 00:10:31.383 }, 00:10:31.383 { 00:10:31.383 "name": "BaseBdev3", 00:10:31.383 "uuid": "f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:31.383 "is_configured": true, 00:10:31.383 "data_offset": 0, 00:10:31.383 "data_size": 65536 00:10:31.383 }, 00:10:31.383 { 00:10:31.383 "name": "BaseBdev4", 00:10:31.383 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:31.383 "is_configured": true, 00:10:31.383 "data_offset": 0, 00:10:31.383 "data_size": 65536 00:10:31.383 } 00:10:31.383 ] 00:10:31.383 }' 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.383 18:51:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.953 [2024-11-16 18:51:15.205758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.953 "name": "Existed_Raid", 00:10:31.953 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.953 "strip_size_kb": 64, 00:10:31.953 "state": "configuring", 00:10:31.953 "raid_level": "concat", 00:10:31.953 "superblock": false, 00:10:31.953 "num_base_bdevs": 4, 00:10:31.953 "num_base_bdevs_discovered": 3, 00:10:31.953 "num_base_bdevs_operational": 4, 00:10:31.953 "base_bdevs_list": [ 00:10:31.953 { 00:10:31.953 "name": null, 00:10:31.953 "uuid": "0306b86c-0416-4667-8944-8a6438db61d4", 00:10:31.953 "is_configured": false, 00:10:31.953 "data_offset": 0, 00:10:31.953 "data_size": 65536 00:10:31.953 }, 00:10:31.953 { 00:10:31.953 "name": "BaseBdev2", 00:10:31.953 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:31.953 "is_configured": true, 00:10:31.953 "data_offset": 0, 00:10:31.953 "data_size": 65536 00:10:31.953 }, 00:10:31.953 { 00:10:31.953 "name": "BaseBdev3", 00:10:31.953 "uuid": "f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:31.953 "is_configured": true, 00:10:31.953 "data_offset": 0, 00:10:31.953 "data_size": 65536 00:10:31.953 }, 00:10:31.953 { 00:10:31.953 "name": "BaseBdev4", 00:10:31.953 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:31.953 "is_configured": true, 00:10:31.953 "data_offset": 0, 00:10:31.953 "data_size": 65536 00:10:31.953 } 00:10:31.953 ] 00:10:31.953 }' 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.953 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq 
'.[0].base_bdevs_list[1].is_configured' 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:32.213 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0306b86c-0416-4667-8944-8a6438db61d4 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.473 [2024-11-16 18:51:15.753106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:32.473 [2024-11-16 18:51:15.753252] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:32.473 [2024-11-16 18:51:15.753278] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:32.473 [2024-11-16 18:51:15.753561] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:10:32.473 [2024-11-16 18:51:15.753767] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:32.473 [2024-11-16 18:51:15.753814] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:32.473 [2024-11-16 18:51:15.754100] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:32.473 NewBaseBdev 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.473 18:51:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.473 [ 00:10:32.473 { 00:10:32.473 "name": "NewBaseBdev", 00:10:32.473 "aliases": [ 00:10:32.473 "0306b86c-0416-4667-8944-8a6438db61d4" 00:10:32.473 ], 00:10:32.473 "product_name": "Malloc disk", 00:10:32.473 "block_size": 512, 00:10:32.473 "num_blocks": 65536, 00:10:32.473 "uuid": "0306b86c-0416-4667-8944-8a6438db61d4", 00:10:32.473 "assigned_rate_limits": { 00:10:32.473 "rw_ios_per_sec": 0, 00:10:32.473 "rw_mbytes_per_sec": 0, 00:10:32.473 "r_mbytes_per_sec": 0, 00:10:32.473 "w_mbytes_per_sec": 0 00:10:32.473 }, 00:10:32.473 "claimed": true, 00:10:32.473 "claim_type": "exclusive_write", 00:10:32.473 "zoned": false, 00:10:32.473 "supported_io_types": { 00:10:32.473 "read": true, 00:10:32.473 "write": true, 00:10:32.473 "unmap": true, 00:10:32.473 "flush": true, 00:10:32.473 "reset": true, 00:10:32.473 "nvme_admin": false, 00:10:32.473 "nvme_io": false, 00:10:32.473 "nvme_io_md": false, 00:10:32.473 "write_zeroes": true, 00:10:32.473 "zcopy": true, 00:10:32.473 "get_zone_info": false, 00:10:32.473 "zone_management": false, 00:10:32.473 "zone_append": false, 00:10:32.473 "compare": false, 00:10:32.473 "compare_and_write": false, 00:10:32.473 "abort": true, 00:10:32.473 "seek_hole": false, 00:10:32.473 "seek_data": false, 00:10:32.473 "copy": true, 00:10:32.473 "nvme_iov_md": false 00:10:32.473 }, 00:10:32.473 "memory_domains": [ 00:10:32.473 { 00:10:32.473 "dma_device_id": "system", 00:10:32.473 "dma_device_type": 1 00:10:32.473 }, 00:10:32.473 { 00:10:32.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.473 "dma_device_type": 2 00:10:32.473 } 00:10:32.473 ], 00:10:32.473 "driver_specific": {} 00:10:32.473 } 00:10:32.473 ] 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:32.473 18:51:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.473 "name": "Existed_Raid", 00:10:32.473 "uuid": "c4c8373d-7a1f-44a0-bef1-4f216782b304", 00:10:32.473 "strip_size_kb": 64, 00:10:32.473 "state": "online", 00:10:32.473 "raid_level": 
"concat", 00:10:32.473 "superblock": false, 00:10:32.473 "num_base_bdevs": 4, 00:10:32.473 "num_base_bdevs_discovered": 4, 00:10:32.473 "num_base_bdevs_operational": 4, 00:10:32.473 "base_bdevs_list": [ 00:10:32.473 { 00:10:32.473 "name": "NewBaseBdev", 00:10:32.473 "uuid": "0306b86c-0416-4667-8944-8a6438db61d4", 00:10:32.473 "is_configured": true, 00:10:32.473 "data_offset": 0, 00:10:32.473 "data_size": 65536 00:10:32.473 }, 00:10:32.473 { 00:10:32.473 "name": "BaseBdev2", 00:10:32.473 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:32.473 "is_configured": true, 00:10:32.473 "data_offset": 0, 00:10:32.473 "data_size": 65536 00:10:32.473 }, 00:10:32.473 { 00:10:32.473 "name": "BaseBdev3", 00:10:32.473 "uuid": "f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:32.473 "is_configured": true, 00:10:32.473 "data_offset": 0, 00:10:32.473 "data_size": 65536 00:10:32.473 }, 00:10:32.473 { 00:10:32.473 "name": "BaseBdev4", 00:10:32.473 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:32.473 "is_configured": true, 00:10:32.473 "data_offset": 0, 00:10:32.473 "data_size": 65536 00:10:32.473 } 00:10:32.473 ] 00:10:32.473 }' 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.473 18:51:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local 
cmp_raid_bdev cmp_base_bdev 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.733 [2024-11-16 18:51:16.176849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:32.733 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:32.994 "name": "Existed_Raid", 00:10:32.994 "aliases": [ 00:10:32.994 "c4c8373d-7a1f-44a0-bef1-4f216782b304" 00:10:32.994 ], 00:10:32.994 "product_name": "Raid Volume", 00:10:32.994 "block_size": 512, 00:10:32.994 "num_blocks": 262144, 00:10:32.994 "uuid": "c4c8373d-7a1f-44a0-bef1-4f216782b304", 00:10:32.994 "assigned_rate_limits": { 00:10:32.994 "rw_ios_per_sec": 0, 00:10:32.994 "rw_mbytes_per_sec": 0, 00:10:32.994 "r_mbytes_per_sec": 0, 00:10:32.994 "w_mbytes_per_sec": 0 00:10:32.994 }, 00:10:32.994 "claimed": false, 00:10:32.994 "zoned": false, 00:10:32.994 "supported_io_types": { 00:10:32.994 "read": true, 00:10:32.994 "write": true, 00:10:32.994 "unmap": true, 00:10:32.994 "flush": true, 00:10:32.994 "reset": true, 00:10:32.994 "nvme_admin": false, 00:10:32.994 "nvme_io": false, 00:10:32.994 "nvme_io_md": false, 00:10:32.994 "write_zeroes": true, 00:10:32.994 "zcopy": false, 00:10:32.994 "get_zone_info": false, 00:10:32.994 "zone_management": false, 00:10:32.994 "zone_append": false, 00:10:32.994 "compare": false, 00:10:32.994 "compare_and_write": false, 00:10:32.994 "abort": false, 00:10:32.994 "seek_hole": false, 00:10:32.994 "seek_data": false, 00:10:32.994 "copy": false, 
00:10:32.994 "nvme_iov_md": false 00:10:32.994 }, 00:10:32.994 "memory_domains": [ 00:10:32.994 { 00:10:32.994 "dma_device_id": "system", 00:10:32.994 "dma_device_type": 1 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.994 "dma_device_type": 2 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "dma_device_id": "system", 00:10:32.994 "dma_device_type": 1 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.994 "dma_device_type": 2 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "dma_device_id": "system", 00:10:32.994 "dma_device_type": 1 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.994 "dma_device_type": 2 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "dma_device_id": "system", 00:10:32.994 "dma_device_type": 1 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.994 "dma_device_type": 2 00:10:32.994 } 00:10:32.994 ], 00:10:32.994 "driver_specific": { 00:10:32.994 "raid": { 00:10:32.994 "uuid": "c4c8373d-7a1f-44a0-bef1-4f216782b304", 00:10:32.994 "strip_size_kb": 64, 00:10:32.994 "state": "online", 00:10:32.994 "raid_level": "concat", 00:10:32.994 "superblock": false, 00:10:32.994 "num_base_bdevs": 4, 00:10:32.994 "num_base_bdevs_discovered": 4, 00:10:32.994 "num_base_bdevs_operational": 4, 00:10:32.994 "base_bdevs_list": [ 00:10:32.994 { 00:10:32.994 "name": "NewBaseBdev", 00:10:32.994 "uuid": "0306b86c-0416-4667-8944-8a6438db61d4", 00:10:32.994 "is_configured": true, 00:10:32.994 "data_offset": 0, 00:10:32.994 "data_size": 65536 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "name": "BaseBdev2", 00:10:32.994 "uuid": "a3ce18b4-d5b0-45a4-b62d-67a3e7a6fc07", 00:10:32.994 "is_configured": true, 00:10:32.994 "data_offset": 0, 00:10:32.994 "data_size": 65536 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "name": "BaseBdev3", 00:10:32.994 "uuid": "f24208c0-0023-422a-bc3b-c17e65c7655d", 00:10:32.994 
"is_configured": true, 00:10:32.994 "data_offset": 0, 00:10:32.994 "data_size": 65536 00:10:32.994 }, 00:10:32.994 { 00:10:32.994 "name": "BaseBdev4", 00:10:32.994 "uuid": "59d2f391-428f-40e5-aebc-da1cb12f3840", 00:10:32.994 "is_configured": true, 00:10:32.994 "data_offset": 0, 00:10:32.994 "data_size": 65536 00:10:32.994 } 00:10:32.994 ] 00:10:32.994 } 00:10:32.994 } 00:10:32.994 }' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:32.994 BaseBdev2 00:10:32.994 BaseBdev3 00:10:32.994 BaseBdev4' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.994 18:51:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:32.994 18:51:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.994 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.254 [2024-11-16 18:51:16.479983] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.254 [2024-11-16 18:51:16.480084] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:33.254 [2024-11-16 18:51:16.480214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:33.254 [2024-11-16 18:51:16.480303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:33.254 [2024-11-16 18:51:16.480351] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 71061 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71061 ']' 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71061 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71061 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71061' 00:10:33.254 killing process with pid 71061 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71061 00:10:33.254 [2024-11-16 18:51:16.531388] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:33.254 18:51:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71061 00:10:33.514 [2024-11-16 18:51:16.913305] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:34.911 18:51:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:34.911 ************************************ 00:10:34.911 END TEST raid_state_function_test 00:10:34.911 ************************************ 00:10:34.911 00:10:34.911 real 0m11.238s 00:10:34.911 user 0m17.894s 00:10:34.911 sys 0m1.943s 00:10:34.911 18:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:34.911 18:51:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:34.911 18:51:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:10:34.911 18:51:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:34.911 18:51:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:34.911 18:51:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:34.911 ************************************ 00:10:34.911 START TEST raid_state_function_test_sb 00:10:34.911 ************************************ 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.911 
18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@229 -- # raid_pid=71727 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71727' 00:10:34.911 Process raid pid: 71727 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71727 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71727 ']' 00:10:34.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.911 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.911 [2024-11-16 18:51:18.149252] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:34.911 [2024-11-16 18:51:18.149365] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.911 [2024-11-16 18:51:18.323942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:35.171 [2024-11-16 18:51:18.439042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.171 [2024-11-16 18:51:18.640893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.171 [2024-11-16 18:51:18.640936] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.741 [2024-11-16 18:51:18.982550] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.741 [2024-11-16 18:51:18.982611] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.741 [2024-11-16 18:51:18.982622] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.741 [2024-11-16 18:51:18.982632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.741 [2024-11-16 18:51:18.982638] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:10:35.741 [2024-11-16 18:51:18.982646] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.741 [2024-11-16 18:51:18.982661] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:35.741 [2024-11-16 18:51:18.982669] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.741 
18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.741 18:51:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.741 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.741 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.741 "name": "Existed_Raid", 00:10:35.741 "uuid": "8b8a1466-c86d-4f72-bbd4-763ebd078169", 00:10:35.741 "strip_size_kb": 64, 00:10:35.741 "state": "configuring", 00:10:35.741 "raid_level": "concat", 00:10:35.741 "superblock": true, 00:10:35.741 "num_base_bdevs": 4, 00:10:35.741 "num_base_bdevs_discovered": 0, 00:10:35.741 "num_base_bdevs_operational": 4, 00:10:35.741 "base_bdevs_list": [ 00:10:35.741 { 00:10:35.741 "name": "BaseBdev1", 00:10:35.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.741 "is_configured": false, 00:10:35.741 "data_offset": 0, 00:10:35.741 "data_size": 0 00:10:35.741 }, 00:10:35.741 { 00:10:35.741 "name": "BaseBdev2", 00:10:35.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.741 "is_configured": false, 00:10:35.741 "data_offset": 0, 00:10:35.741 "data_size": 0 00:10:35.741 }, 00:10:35.741 { 00:10:35.741 "name": "BaseBdev3", 00:10:35.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.741 "is_configured": false, 00:10:35.741 "data_offset": 0, 00:10:35.741 "data_size": 0 00:10:35.741 }, 00:10:35.741 { 00:10:35.741 "name": "BaseBdev4", 00:10:35.741 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.741 "is_configured": false, 00:10:35.741 "data_offset": 0, 00:10:35.741 "data_size": 0 00:10:35.741 } 00:10:35.741 ] 00:10:35.741 }' 00:10:35.742 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.742 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.002 18:51:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.002 [2024-11-16 18:51:19.385810] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.002 [2024-11-16 18:51:19.385905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.002 [2024-11-16 18:51:19.397788] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.002 [2024-11-16 18:51:19.397883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.002 [2024-11-16 18:51:19.397912] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.002 [2024-11-16 18:51:19.397936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.002 [2024-11-16 18:51:19.397954] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.002 [2024-11-16 18:51:19.397976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.002 [2024-11-16 18:51:19.397994] bdev.c:8277:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:10:36.002 [2024-11-16 18:51:19.398015] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.002 [2024-11-16 18:51:19.441360] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.002 BaseBdev1 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.002 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.002 [ 00:10:36.002 { 00:10:36.002 "name": "BaseBdev1", 00:10:36.002 "aliases": [ 00:10:36.002 "74f6f185-631f-4cd5-832f-326062012fac" 00:10:36.002 ], 00:10:36.002 "product_name": "Malloc disk", 00:10:36.002 "block_size": 512, 00:10:36.002 "num_blocks": 65536, 00:10:36.002 "uuid": "74f6f185-631f-4cd5-832f-326062012fac", 00:10:36.002 "assigned_rate_limits": { 00:10:36.002 "rw_ios_per_sec": 0, 00:10:36.002 "rw_mbytes_per_sec": 0, 00:10:36.002 "r_mbytes_per_sec": 0, 00:10:36.002 "w_mbytes_per_sec": 0 00:10:36.002 }, 00:10:36.002 "claimed": true, 00:10:36.002 "claim_type": "exclusive_write", 00:10:36.002 "zoned": false, 00:10:36.002 "supported_io_types": { 00:10:36.002 "read": true, 00:10:36.002 "write": true, 00:10:36.002 "unmap": true, 00:10:36.002 "flush": true, 00:10:36.002 "reset": true, 00:10:36.002 "nvme_admin": false, 00:10:36.002 "nvme_io": false, 00:10:36.262 "nvme_io_md": false, 00:10:36.262 "write_zeroes": true, 00:10:36.262 "zcopy": true, 00:10:36.262 "get_zone_info": false, 00:10:36.262 "zone_management": false, 00:10:36.262 "zone_append": false, 00:10:36.262 "compare": false, 00:10:36.262 "compare_and_write": false, 00:10:36.262 "abort": true, 00:10:36.262 "seek_hole": false, 00:10:36.262 "seek_data": false, 00:10:36.262 "copy": true, 00:10:36.262 "nvme_iov_md": false 00:10:36.262 }, 00:10:36.262 "memory_domains": [ 00:10:36.262 { 00:10:36.262 "dma_device_id": "system", 00:10:36.262 "dma_device_type": 1 00:10:36.262 }, 00:10:36.262 { 00:10:36.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.262 "dma_device_type": 2 00:10:36.262 } 
00:10:36.262 ], 00:10:36.262 "driver_specific": {} 00:10:36.262 } 00:10:36.262 ] 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.262 18:51:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.262 "name": "Existed_Raid", 00:10:36.262 "uuid": "dca99022-a58b-49cf-b295-e3b1c4dfc95c", 00:10:36.262 "strip_size_kb": 64, 00:10:36.262 "state": "configuring", 00:10:36.262 "raid_level": "concat", 00:10:36.262 "superblock": true, 00:10:36.262 "num_base_bdevs": 4, 00:10:36.262 "num_base_bdevs_discovered": 1, 00:10:36.262 "num_base_bdevs_operational": 4, 00:10:36.262 "base_bdevs_list": [ 00:10:36.262 { 00:10:36.262 "name": "BaseBdev1", 00:10:36.262 "uuid": "74f6f185-631f-4cd5-832f-326062012fac", 00:10:36.262 "is_configured": true, 00:10:36.262 "data_offset": 2048, 00:10:36.262 "data_size": 63488 00:10:36.262 }, 00:10:36.262 { 00:10:36.262 "name": "BaseBdev2", 00:10:36.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.262 "is_configured": false, 00:10:36.262 "data_offset": 0, 00:10:36.262 "data_size": 0 00:10:36.262 }, 00:10:36.262 { 00:10:36.262 "name": "BaseBdev3", 00:10:36.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.262 "is_configured": false, 00:10:36.262 "data_offset": 0, 00:10:36.262 "data_size": 0 00:10:36.262 }, 00:10:36.262 { 00:10:36.262 "name": "BaseBdev4", 00:10:36.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.262 "is_configured": false, 00:10:36.262 "data_offset": 0, 00:10:36.262 "data_size": 0 00:10:36.262 } 00:10:36.262 ] 00:10:36.262 }' 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.262 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.521 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:36.521 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.521 18:51:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.522 [2024-11-16 18:51:19.900670] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:36.522 [2024-11-16 18:51:19.900784] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.522 [2024-11-16 18:51:19.912701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.522 [2024-11-16 18:51:19.914500] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:36.522 [2024-11-16 18:51:19.914592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:36.522 [2024-11-16 18:51:19.914621] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:36.522 [2024-11-16 18:51:19.914645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:36.522 [2024-11-16 18:51:19.914692] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:36.522 [2024-11-16 18:51:19.914713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:10:36.522 "name": "Existed_Raid", 00:10:36.522 "uuid": "f42bbe0b-332f-4f5f-8b08-54e1863e4109", 00:10:36.522 "strip_size_kb": 64, 00:10:36.522 "state": "configuring", 00:10:36.522 "raid_level": "concat", 00:10:36.522 "superblock": true, 00:10:36.522 "num_base_bdevs": 4, 00:10:36.522 "num_base_bdevs_discovered": 1, 00:10:36.522 "num_base_bdevs_operational": 4, 00:10:36.522 "base_bdevs_list": [ 00:10:36.522 { 00:10:36.522 "name": "BaseBdev1", 00:10:36.522 "uuid": "74f6f185-631f-4cd5-832f-326062012fac", 00:10:36.522 "is_configured": true, 00:10:36.522 "data_offset": 2048, 00:10:36.522 "data_size": 63488 00:10:36.522 }, 00:10:36.522 { 00:10:36.522 "name": "BaseBdev2", 00:10:36.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.522 "is_configured": false, 00:10:36.522 "data_offset": 0, 00:10:36.522 "data_size": 0 00:10:36.522 }, 00:10:36.522 { 00:10:36.522 "name": "BaseBdev3", 00:10:36.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.522 "is_configured": false, 00:10:36.522 "data_offset": 0, 00:10:36.522 "data_size": 0 00:10:36.522 }, 00:10:36.522 { 00:10:36.522 "name": "BaseBdev4", 00:10:36.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.522 "is_configured": false, 00:10:36.522 "data_offset": 0, 00:10:36.522 "data_size": 0 00:10:36.522 } 00:10:36.522 ] 00:10:36.522 }' 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.522 18:51:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.091 [2024-11-16 18:51:20.411115] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:10:37.091 BaseBdev2 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.091 [ 00:10:37.091 { 00:10:37.091 "name": "BaseBdev2", 00:10:37.091 "aliases": [ 00:10:37.091 "548cdbba-f719-43b5-af8f-6d5100bf871c" 00:10:37.091 ], 00:10:37.091 "product_name": "Malloc disk", 00:10:37.091 "block_size": 512, 00:10:37.091 "num_blocks": 65536, 00:10:37.091 "uuid": "548cdbba-f719-43b5-af8f-6d5100bf871c", 
00:10:37.091 "assigned_rate_limits": { 00:10:37.091 "rw_ios_per_sec": 0, 00:10:37.091 "rw_mbytes_per_sec": 0, 00:10:37.091 "r_mbytes_per_sec": 0, 00:10:37.091 "w_mbytes_per_sec": 0 00:10:37.091 }, 00:10:37.091 "claimed": true, 00:10:37.091 "claim_type": "exclusive_write", 00:10:37.091 "zoned": false, 00:10:37.091 "supported_io_types": { 00:10:37.091 "read": true, 00:10:37.091 "write": true, 00:10:37.091 "unmap": true, 00:10:37.091 "flush": true, 00:10:37.091 "reset": true, 00:10:37.091 "nvme_admin": false, 00:10:37.091 "nvme_io": false, 00:10:37.091 "nvme_io_md": false, 00:10:37.091 "write_zeroes": true, 00:10:37.091 "zcopy": true, 00:10:37.091 "get_zone_info": false, 00:10:37.091 "zone_management": false, 00:10:37.091 "zone_append": false, 00:10:37.091 "compare": false, 00:10:37.091 "compare_and_write": false, 00:10:37.091 "abort": true, 00:10:37.091 "seek_hole": false, 00:10:37.091 "seek_data": false, 00:10:37.091 "copy": true, 00:10:37.091 "nvme_iov_md": false 00:10:37.091 }, 00:10:37.091 "memory_domains": [ 00:10:37.091 { 00:10:37.091 "dma_device_id": "system", 00:10:37.091 "dma_device_type": 1 00:10:37.091 }, 00:10:37.091 { 00:10:37.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.091 "dma_device_type": 2 00:10:37.091 } 00:10:37.091 ], 00:10:37.091 "driver_specific": {} 00:10:37.091 } 00:10:37.091 ] 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.091 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.092 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.092 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.092 "name": "Existed_Raid", 00:10:37.092 "uuid": "f42bbe0b-332f-4f5f-8b08-54e1863e4109", 00:10:37.092 "strip_size_kb": 64, 00:10:37.092 "state": "configuring", 00:10:37.092 "raid_level": "concat", 00:10:37.092 "superblock": true, 00:10:37.092 "num_base_bdevs": 4, 00:10:37.092 "num_base_bdevs_discovered": 2, 00:10:37.092 
"num_base_bdevs_operational": 4, 00:10:37.092 "base_bdevs_list": [ 00:10:37.092 { 00:10:37.092 "name": "BaseBdev1", 00:10:37.092 "uuid": "74f6f185-631f-4cd5-832f-326062012fac", 00:10:37.092 "is_configured": true, 00:10:37.092 "data_offset": 2048, 00:10:37.092 "data_size": 63488 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "name": "BaseBdev2", 00:10:37.092 "uuid": "548cdbba-f719-43b5-af8f-6d5100bf871c", 00:10:37.092 "is_configured": true, 00:10:37.092 "data_offset": 2048, 00:10:37.092 "data_size": 63488 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "name": "BaseBdev3", 00:10:37.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.092 "is_configured": false, 00:10:37.092 "data_offset": 0, 00:10:37.092 "data_size": 0 00:10:37.092 }, 00:10:37.092 { 00:10:37.092 "name": "BaseBdev4", 00:10:37.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.092 "is_configured": false, 00:10:37.092 "data_offset": 0, 00:10:37.092 "data_size": 0 00:10:37.092 } 00:10:37.092 ] 00:10:37.092 }' 00:10:37.092 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.092 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.661 [2024-11-16 18:51:20.955950] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.661 BaseBdev3 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.661 [ 00:10:37.661 { 00:10:37.661 "name": "BaseBdev3", 00:10:37.661 "aliases": [ 00:10:37.661 "1e316f4d-8b06-4bd3-9c06-b66844550087" 00:10:37.661 ], 00:10:37.661 "product_name": "Malloc disk", 00:10:37.661 "block_size": 512, 00:10:37.661 "num_blocks": 65536, 00:10:37.661 "uuid": "1e316f4d-8b06-4bd3-9c06-b66844550087", 00:10:37.661 "assigned_rate_limits": { 00:10:37.661 "rw_ios_per_sec": 0, 00:10:37.661 "rw_mbytes_per_sec": 0, 00:10:37.661 "r_mbytes_per_sec": 0, 00:10:37.661 "w_mbytes_per_sec": 0 00:10:37.661 }, 00:10:37.661 "claimed": true, 00:10:37.661 "claim_type": "exclusive_write", 00:10:37.661 "zoned": false, 00:10:37.661 "supported_io_types": { 
00:10:37.661 "read": true, 00:10:37.661 "write": true, 00:10:37.661 "unmap": true, 00:10:37.661 "flush": true, 00:10:37.661 "reset": true, 00:10:37.661 "nvme_admin": false, 00:10:37.661 "nvme_io": false, 00:10:37.661 "nvme_io_md": false, 00:10:37.661 "write_zeroes": true, 00:10:37.661 "zcopy": true, 00:10:37.661 "get_zone_info": false, 00:10:37.661 "zone_management": false, 00:10:37.661 "zone_append": false, 00:10:37.661 "compare": false, 00:10:37.661 "compare_and_write": false, 00:10:37.661 "abort": true, 00:10:37.661 "seek_hole": false, 00:10:37.661 "seek_data": false, 00:10:37.661 "copy": true, 00:10:37.661 "nvme_iov_md": false 00:10:37.661 }, 00:10:37.661 "memory_domains": [ 00:10:37.661 { 00:10:37.661 "dma_device_id": "system", 00:10:37.661 "dma_device_type": 1 00:10:37.661 }, 00:10:37.661 { 00:10:37.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.661 "dma_device_type": 2 00:10:37.661 } 00:10:37.661 ], 00:10:37.661 "driver_specific": {} 00:10:37.661 } 00:10:37.661 ] 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:37.661 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.662 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.662 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.662 18:51:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.662 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:37.662 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.662 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.662 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.662 18:51:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.662 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.662 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.662 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.662 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.662 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.662 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.662 "name": "Existed_Raid", 00:10:37.662 "uuid": "f42bbe0b-332f-4f5f-8b08-54e1863e4109", 00:10:37.662 "strip_size_kb": 64, 00:10:37.662 "state": "configuring", 00:10:37.662 "raid_level": "concat", 00:10:37.662 "superblock": true, 00:10:37.662 "num_base_bdevs": 4, 00:10:37.662 "num_base_bdevs_discovered": 3, 00:10:37.662 "num_base_bdevs_operational": 4, 00:10:37.662 "base_bdevs_list": [ 00:10:37.662 { 00:10:37.662 "name": "BaseBdev1", 00:10:37.662 "uuid": "74f6f185-631f-4cd5-832f-326062012fac", 00:10:37.662 "is_configured": true, 00:10:37.662 "data_offset": 2048, 00:10:37.662 "data_size": 63488 00:10:37.662 }, 00:10:37.662 { 00:10:37.662 "name": "BaseBdev2", 00:10:37.662 
"uuid": "548cdbba-f719-43b5-af8f-6d5100bf871c", 00:10:37.662 "is_configured": true, 00:10:37.662 "data_offset": 2048, 00:10:37.662 "data_size": 63488 00:10:37.662 }, 00:10:37.662 { 00:10:37.662 "name": "BaseBdev3", 00:10:37.662 "uuid": "1e316f4d-8b06-4bd3-9c06-b66844550087", 00:10:37.662 "is_configured": true, 00:10:37.662 "data_offset": 2048, 00:10:37.662 "data_size": 63488 00:10:37.662 }, 00:10:37.662 { 00:10:37.662 "name": "BaseBdev4", 00:10:37.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.662 "is_configured": false, 00:10:37.662 "data_offset": 0, 00:10:37.662 "data_size": 0 00:10:37.662 } 00:10:37.662 ] 00:10:37.662 }' 00:10:37.662 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.662 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.232 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:38.232 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.232 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.232 [2024-11-16 18:51:21.459319] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:38.232 [2024-11-16 18:51:21.459561] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:38.232 [2024-11-16 18:51:21.459577] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:38.232 [2024-11-16 18:51:21.459891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:38.232 [2024-11-16 18:51:21.460085] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:38.232 [2024-11-16 18:51:21.460104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:10:38.232 BaseBdev4 00:10:38.232 [2024-11-16 18:51:21.460272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:38.232 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.232 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.233 [ 00:10:38.233 { 00:10:38.233 "name": "BaseBdev4", 00:10:38.233 "aliases": [ 00:10:38.233 "1d380f8c-a728-475c-829e-05360d43b51d" 00:10:38.233 ], 00:10:38.233 "product_name": "Malloc disk", 00:10:38.233 "block_size": 512, 
00:10:38.233 "num_blocks": 65536, 00:10:38.233 "uuid": "1d380f8c-a728-475c-829e-05360d43b51d", 00:10:38.233 "assigned_rate_limits": { 00:10:38.233 "rw_ios_per_sec": 0, 00:10:38.233 "rw_mbytes_per_sec": 0, 00:10:38.233 "r_mbytes_per_sec": 0, 00:10:38.233 "w_mbytes_per_sec": 0 00:10:38.233 }, 00:10:38.233 "claimed": true, 00:10:38.233 "claim_type": "exclusive_write", 00:10:38.233 "zoned": false, 00:10:38.233 "supported_io_types": { 00:10:38.233 "read": true, 00:10:38.233 "write": true, 00:10:38.233 "unmap": true, 00:10:38.233 "flush": true, 00:10:38.233 "reset": true, 00:10:38.233 "nvme_admin": false, 00:10:38.233 "nvme_io": false, 00:10:38.233 "nvme_io_md": false, 00:10:38.233 "write_zeroes": true, 00:10:38.233 "zcopy": true, 00:10:38.233 "get_zone_info": false, 00:10:38.233 "zone_management": false, 00:10:38.233 "zone_append": false, 00:10:38.233 "compare": false, 00:10:38.233 "compare_and_write": false, 00:10:38.233 "abort": true, 00:10:38.233 "seek_hole": false, 00:10:38.233 "seek_data": false, 00:10:38.233 "copy": true, 00:10:38.233 "nvme_iov_md": false 00:10:38.233 }, 00:10:38.233 "memory_domains": [ 00:10:38.233 { 00:10:38.233 "dma_device_id": "system", 00:10:38.233 "dma_device_type": 1 00:10:38.233 }, 00:10:38.233 { 00:10:38.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.233 "dma_device_type": 2 00:10:38.233 } 00:10:38.233 ], 00:10:38.233 "driver_specific": {} 00:10:38.233 } 00:10:38.233 ] 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 
64 4 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.233 "name": "Existed_Raid", 00:10:38.233 "uuid": "f42bbe0b-332f-4f5f-8b08-54e1863e4109", 00:10:38.233 "strip_size_kb": 64, 00:10:38.233 "state": "online", 00:10:38.233 "raid_level": "concat", 00:10:38.233 "superblock": true, 00:10:38.233 "num_base_bdevs": 
4, 00:10:38.233 "num_base_bdevs_discovered": 4, 00:10:38.233 "num_base_bdevs_operational": 4, 00:10:38.233 "base_bdevs_list": [ 00:10:38.233 { 00:10:38.233 "name": "BaseBdev1", 00:10:38.233 "uuid": "74f6f185-631f-4cd5-832f-326062012fac", 00:10:38.233 "is_configured": true, 00:10:38.233 "data_offset": 2048, 00:10:38.233 "data_size": 63488 00:10:38.233 }, 00:10:38.233 { 00:10:38.233 "name": "BaseBdev2", 00:10:38.233 "uuid": "548cdbba-f719-43b5-af8f-6d5100bf871c", 00:10:38.233 "is_configured": true, 00:10:38.233 "data_offset": 2048, 00:10:38.233 "data_size": 63488 00:10:38.233 }, 00:10:38.233 { 00:10:38.233 "name": "BaseBdev3", 00:10:38.233 "uuid": "1e316f4d-8b06-4bd3-9c06-b66844550087", 00:10:38.233 "is_configured": true, 00:10:38.233 "data_offset": 2048, 00:10:38.233 "data_size": 63488 00:10:38.233 }, 00:10:38.233 { 00:10:38.233 "name": "BaseBdev4", 00:10:38.233 "uuid": "1d380f8c-a728-475c-829e-05360d43b51d", 00:10:38.233 "is_configured": true, 00:10:38.233 "data_offset": 2048, 00:10:38.233 "data_size": 63488 00:10:38.233 } 00:10:38.233 ] 00:10:38.233 }' 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.233 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:38.493 
18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.493 [2024-11-16 18:51:21.914969] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.493 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:38.493 "name": "Existed_Raid", 00:10:38.493 "aliases": [ 00:10:38.493 "f42bbe0b-332f-4f5f-8b08-54e1863e4109" 00:10:38.493 ], 00:10:38.493 "product_name": "Raid Volume", 00:10:38.493 "block_size": 512, 00:10:38.493 "num_blocks": 253952, 00:10:38.493 "uuid": "f42bbe0b-332f-4f5f-8b08-54e1863e4109", 00:10:38.493 "assigned_rate_limits": { 00:10:38.493 "rw_ios_per_sec": 0, 00:10:38.493 "rw_mbytes_per_sec": 0, 00:10:38.493 "r_mbytes_per_sec": 0, 00:10:38.493 "w_mbytes_per_sec": 0 00:10:38.493 }, 00:10:38.493 "claimed": false, 00:10:38.493 "zoned": false, 00:10:38.493 "supported_io_types": { 00:10:38.493 "read": true, 00:10:38.493 "write": true, 00:10:38.493 "unmap": true, 00:10:38.494 "flush": true, 00:10:38.494 "reset": true, 00:10:38.494 "nvme_admin": false, 00:10:38.494 "nvme_io": false, 00:10:38.494 "nvme_io_md": false, 00:10:38.494 "write_zeroes": true, 00:10:38.494 "zcopy": false, 00:10:38.494 "get_zone_info": false, 00:10:38.494 "zone_management": false, 00:10:38.494 "zone_append": false, 00:10:38.494 "compare": false, 00:10:38.494 "compare_and_write": false, 00:10:38.494 "abort": false, 00:10:38.494 "seek_hole": false, 00:10:38.494 "seek_data": false, 00:10:38.494 "copy": false, 00:10:38.494 
"nvme_iov_md": false 00:10:38.494 }, 00:10:38.494 "memory_domains": [ 00:10:38.494 { 00:10:38.494 "dma_device_id": "system", 00:10:38.494 "dma_device_type": 1 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.494 "dma_device_type": 2 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "dma_device_id": "system", 00:10:38.494 "dma_device_type": 1 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.494 "dma_device_type": 2 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "dma_device_id": "system", 00:10:38.494 "dma_device_type": 1 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.494 "dma_device_type": 2 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "dma_device_id": "system", 00:10:38.494 "dma_device_type": 1 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.494 "dma_device_type": 2 00:10:38.494 } 00:10:38.494 ], 00:10:38.494 "driver_specific": { 00:10:38.494 "raid": { 00:10:38.494 "uuid": "f42bbe0b-332f-4f5f-8b08-54e1863e4109", 00:10:38.494 "strip_size_kb": 64, 00:10:38.494 "state": "online", 00:10:38.494 "raid_level": "concat", 00:10:38.494 "superblock": true, 00:10:38.494 "num_base_bdevs": 4, 00:10:38.494 "num_base_bdevs_discovered": 4, 00:10:38.494 "num_base_bdevs_operational": 4, 00:10:38.494 "base_bdevs_list": [ 00:10:38.494 { 00:10:38.494 "name": "BaseBdev1", 00:10:38.494 "uuid": "74f6f185-631f-4cd5-832f-326062012fac", 00:10:38.494 "is_configured": true, 00:10:38.494 "data_offset": 2048, 00:10:38.494 "data_size": 63488 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "name": "BaseBdev2", 00:10:38.494 "uuid": "548cdbba-f719-43b5-af8f-6d5100bf871c", 00:10:38.494 "is_configured": true, 00:10:38.494 "data_offset": 2048, 00:10:38.494 "data_size": 63488 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "name": "BaseBdev3", 00:10:38.494 "uuid": "1e316f4d-8b06-4bd3-9c06-b66844550087", 00:10:38.494 "is_configured": true, 
00:10:38.494 "data_offset": 2048, 00:10:38.494 "data_size": 63488 00:10:38.494 }, 00:10:38.494 { 00:10:38.494 "name": "BaseBdev4", 00:10:38.494 "uuid": "1d380f8c-a728-475c-829e-05360d43b51d", 00:10:38.494 "is_configured": true, 00:10:38.494 "data_offset": 2048, 00:10:38.494 "data_size": 63488 00:10:38.494 } 00:10:38.494 ] 00:10:38.494 } 00:10:38.494 } 00:10:38.494 }' 00:10:38.494 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:38.754 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:38.754 BaseBdev2 00:10:38.754 BaseBdev3 00:10:38.754 BaseBdev4' 00:10:38.754 18:51:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.754 18:51:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.754 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.755 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.015 [2024-11-16 18:51:22.226122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:39.015 [2024-11-16 18:51:22.226153] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:39.015 [2024-11-16 18:51:22.226206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
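The transcript above repeatedly exercises one verification pattern: the shell helper `verify_raid_bdev_state` (bdev_raid.sh@103-115) fetches `bdev_raid_get_bdevs all` over RPC, filters the result with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the state fields against the expected values. A minimal sketch of that same check in Python, using sample data copied from the offline-state record in this log — the `verify_raid_bdev_state` Python function and the inlined `RPC_OUTPUT` sample are illustrative reconstructions, not part of SPDK:

```python
import json

# Sample shaped like `rpc.py bdev_raid_get_bdevs all` output; the values
# (name, raid_level, strip size, 3-of-4 configured base bdevs) are taken
# from the offline-state record visible in this transcript.
RPC_OUTPUT = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "offline",
    "raid_level": "concat",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3,
    "base_bdevs_list": [
        {"name": None, "is_configured": False},
        {"name": "BaseBdev2", "is_configured": True},
        {"name": "BaseBdev3", "is_configured": True},
        {"name": "BaseBdev4", "is_configured": True},
    ],
}])

def verify_raid_bdev_state(rpc_json, raid_bdev_name, expected_state,
                           raid_level, strip_size, num_operational):
    """Illustrative mirror of the shell helper: select the raid bdev by
    name (the jq step), then compare each field the test asserts on."""
    info = next(b for b in json.loads(rpc_json) if b["name"] == raid_bdev_name)
    discovered = sum(1 for b in info["base_bdevs_list"] if b["is_configured"])
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational
    assert info["num_base_bdevs_discovered"] == discovered
    return info

# The same check the log performs at bdev_raid.sh@266 after deleting
# BaseBdev1: state "offline", level "concat", strip 64, 3 operational.
state = verify_raid_bdev_state(RPC_OUTPUT, "Existed_Raid", "offline",
                               "concat", 64, 3)
print(state["state"])  # prints "offline"
```

In the actual test the comparison runs against a live SPDK target, so a failed assertion here corresponds to the shell helper exiting non-zero and the autotest run being marked failed.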
00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.015 "name": "Existed_Raid", 00:10:39.015 "uuid": "f42bbe0b-332f-4f5f-8b08-54e1863e4109", 00:10:39.015 "strip_size_kb": 64, 00:10:39.015 "state": "offline", 00:10:39.015 "raid_level": "concat", 00:10:39.015 "superblock": true, 00:10:39.015 "num_base_bdevs": 4, 00:10:39.015 "num_base_bdevs_discovered": 3, 00:10:39.015 "num_base_bdevs_operational": 3, 00:10:39.015 "base_bdevs_list": [ 00:10:39.015 { 00:10:39.015 "name": null, 00:10:39.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.015 "is_configured": false, 00:10:39.015 "data_offset": 0, 00:10:39.015 "data_size": 63488 00:10:39.015 }, 00:10:39.015 { 00:10:39.015 "name": "BaseBdev2", 00:10:39.015 "uuid": "548cdbba-f719-43b5-af8f-6d5100bf871c", 00:10:39.015 "is_configured": true, 00:10:39.015 "data_offset": 2048, 00:10:39.015 "data_size": 63488 00:10:39.015 }, 00:10:39.015 { 00:10:39.015 "name": "BaseBdev3", 00:10:39.015 "uuid": "1e316f4d-8b06-4bd3-9c06-b66844550087", 00:10:39.015 "is_configured": true, 00:10:39.015 "data_offset": 2048, 00:10:39.015 "data_size": 63488 00:10:39.015 }, 00:10:39.015 { 00:10:39.015 "name": "BaseBdev4", 00:10:39.015 "uuid": "1d380f8c-a728-475c-829e-05360d43b51d", 00:10:39.015 "is_configured": true, 00:10:39.015 "data_offset": 2048, 00:10:39.015 "data_size": 63488 00:10:39.015 } 00:10:39.015 ] 00:10:39.015 }' 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.015 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.275 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:39.275 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.275 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.275 
18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.275 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.275 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.275 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.534 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.535 [2024-11-16 18:51:22.764846] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.535 18:51:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.535 [2024-11-16 18:51:22.915583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:39.794 18:51:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.794 [2024-11-16 18:51:23.054362] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:39.794 [2024-11-16 18:51:23.054414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:39.794 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.795 BaseBdev2 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.795 [ 00:10:39.795 { 00:10:39.795 "name": "BaseBdev2", 00:10:39.795 "aliases": [ 00:10:39.795 
"8ba563aa-b7db-4fe6-9a8b-e998e8d07241" 00:10:39.795 ], 00:10:39.795 "product_name": "Malloc disk", 00:10:39.795 "block_size": 512, 00:10:39.795 "num_blocks": 65536, 00:10:39.795 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:39.795 "assigned_rate_limits": { 00:10:39.795 "rw_ios_per_sec": 0, 00:10:39.795 "rw_mbytes_per_sec": 0, 00:10:39.795 "r_mbytes_per_sec": 0, 00:10:39.795 "w_mbytes_per_sec": 0 00:10:39.795 }, 00:10:39.795 "claimed": false, 00:10:39.795 "zoned": false, 00:10:39.795 "supported_io_types": { 00:10:39.795 "read": true, 00:10:39.795 "write": true, 00:10:39.795 "unmap": true, 00:10:39.795 "flush": true, 00:10:39.795 "reset": true, 00:10:39.795 "nvme_admin": false, 00:10:39.795 "nvme_io": false, 00:10:39.795 "nvme_io_md": false, 00:10:39.795 "write_zeroes": true, 00:10:39.795 "zcopy": true, 00:10:39.795 "get_zone_info": false, 00:10:39.795 "zone_management": false, 00:10:39.795 "zone_append": false, 00:10:39.795 "compare": false, 00:10:39.795 "compare_and_write": false, 00:10:39.795 "abort": true, 00:10:39.795 "seek_hole": false, 00:10:39.795 "seek_data": false, 00:10:39.795 "copy": true, 00:10:39.795 "nvme_iov_md": false 00:10:39.795 }, 00:10:39.795 "memory_domains": [ 00:10:39.795 { 00:10:39.795 "dma_device_id": "system", 00:10:39.795 "dma_device_type": 1 00:10:39.795 }, 00:10:39.795 { 00:10:39.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.795 "dma_device_type": 2 00:10:39.795 } 00:10:39.795 ], 00:10:39.795 "driver_specific": {} 00:10:39.795 } 00:10:39.795 ] 00:10:39.795 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.055 18:51:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.055 BaseBdev3 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.055 [ 00:10:40.055 { 
00:10:40.055 "name": "BaseBdev3", 00:10:40.055 "aliases": [ 00:10:40.055 "57949278-31e9-4d2b-a0c8-3bdd702854cb" 00:10:40.055 ], 00:10:40.055 "product_name": "Malloc disk", 00:10:40.055 "block_size": 512, 00:10:40.055 "num_blocks": 65536, 00:10:40.055 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:40.055 "assigned_rate_limits": { 00:10:40.055 "rw_ios_per_sec": 0, 00:10:40.055 "rw_mbytes_per_sec": 0, 00:10:40.055 "r_mbytes_per_sec": 0, 00:10:40.055 "w_mbytes_per_sec": 0 00:10:40.055 }, 00:10:40.055 "claimed": false, 00:10:40.055 "zoned": false, 00:10:40.055 "supported_io_types": { 00:10:40.055 "read": true, 00:10:40.055 "write": true, 00:10:40.055 "unmap": true, 00:10:40.055 "flush": true, 00:10:40.055 "reset": true, 00:10:40.055 "nvme_admin": false, 00:10:40.055 "nvme_io": false, 00:10:40.055 "nvme_io_md": false, 00:10:40.055 "write_zeroes": true, 00:10:40.055 "zcopy": true, 00:10:40.055 "get_zone_info": false, 00:10:40.055 "zone_management": false, 00:10:40.055 "zone_append": false, 00:10:40.055 "compare": false, 00:10:40.055 "compare_and_write": false, 00:10:40.055 "abort": true, 00:10:40.055 "seek_hole": false, 00:10:40.055 "seek_data": false, 00:10:40.055 "copy": true, 00:10:40.055 "nvme_iov_md": false 00:10:40.055 }, 00:10:40.055 "memory_domains": [ 00:10:40.055 { 00:10:40.055 "dma_device_id": "system", 00:10:40.055 "dma_device_type": 1 00:10:40.055 }, 00:10:40.055 { 00:10:40.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.055 "dma_device_type": 2 00:10:40.055 } 00:10:40.055 ], 00:10:40.055 "driver_specific": {} 00:10:40.055 } 00:10:40.055 ] 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.055 BaseBdev4 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.055 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:40.056 [ 00:10:40.056 { 00:10:40.056 "name": "BaseBdev4", 00:10:40.056 "aliases": [ 00:10:40.056 "eb654df7-cb90-48a1-b738-3a3315656796" 00:10:40.056 ], 00:10:40.056 "product_name": "Malloc disk", 00:10:40.056 "block_size": 512, 00:10:40.056 "num_blocks": 65536, 00:10:40.056 "uuid": "eb654df7-cb90-48a1-b738-3a3315656796", 00:10:40.056 "assigned_rate_limits": { 00:10:40.056 "rw_ios_per_sec": 0, 00:10:40.056 "rw_mbytes_per_sec": 0, 00:10:40.056 "r_mbytes_per_sec": 0, 00:10:40.056 "w_mbytes_per_sec": 0 00:10:40.056 }, 00:10:40.056 "claimed": false, 00:10:40.056 "zoned": false, 00:10:40.056 "supported_io_types": { 00:10:40.056 "read": true, 00:10:40.056 "write": true, 00:10:40.056 "unmap": true, 00:10:40.056 "flush": true, 00:10:40.056 "reset": true, 00:10:40.056 "nvme_admin": false, 00:10:40.056 "nvme_io": false, 00:10:40.056 "nvme_io_md": false, 00:10:40.056 "write_zeroes": true, 00:10:40.056 "zcopy": true, 00:10:40.056 "get_zone_info": false, 00:10:40.056 "zone_management": false, 00:10:40.056 "zone_append": false, 00:10:40.056 "compare": false, 00:10:40.056 "compare_and_write": false, 00:10:40.056 "abort": true, 00:10:40.056 "seek_hole": false, 00:10:40.056 "seek_data": false, 00:10:40.056 "copy": true, 00:10:40.056 "nvme_iov_md": false 00:10:40.056 }, 00:10:40.056 "memory_domains": [ 00:10:40.056 { 00:10:40.056 "dma_device_id": "system", 00:10:40.056 "dma_device_type": 1 00:10:40.056 }, 00:10:40.056 { 00:10:40.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.056 "dma_device_type": 2 00:10:40.056 } 00:10:40.056 ], 00:10:40.056 "driver_specific": {} 00:10:40.056 } 00:10:40.056 ] 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:40.056 18:51:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.056 [2024-11-16 18:51:23.400138] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:40.056 [2024-11-16 18:51:23.400228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:40.056 [2024-11-16 18:51:23.400292] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.056 [2024-11-16 18:51:23.402175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.056 [2024-11-16 18:51:23.402276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.056 "name": "Existed_Raid", 00:10:40.056 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:40.056 "strip_size_kb": 64, 00:10:40.056 "state": "configuring", 00:10:40.056 "raid_level": "concat", 00:10:40.056 "superblock": true, 00:10:40.056 "num_base_bdevs": 4, 00:10:40.056 "num_base_bdevs_discovered": 3, 00:10:40.056 "num_base_bdevs_operational": 4, 00:10:40.056 "base_bdevs_list": [ 00:10:40.056 { 00:10:40.056 "name": "BaseBdev1", 00:10:40.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.056 "is_configured": false, 00:10:40.056 "data_offset": 0, 00:10:40.056 "data_size": 0 00:10:40.056 }, 00:10:40.056 { 00:10:40.056 "name": "BaseBdev2", 00:10:40.056 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:40.056 "is_configured": true, 00:10:40.056 "data_offset": 2048, 00:10:40.056 "data_size": 63488 
00:10:40.056 }, 00:10:40.056 { 00:10:40.056 "name": "BaseBdev3", 00:10:40.056 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:40.056 "is_configured": true, 00:10:40.056 "data_offset": 2048, 00:10:40.056 "data_size": 63488 00:10:40.056 }, 00:10:40.056 { 00:10:40.056 "name": "BaseBdev4", 00:10:40.056 "uuid": "eb654df7-cb90-48a1-b738-3a3315656796", 00:10:40.056 "is_configured": true, 00:10:40.056 "data_offset": 2048, 00:10:40.056 "data_size": 63488 00:10:40.056 } 00:10:40.056 ] 00:10:40.056 }' 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.056 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.627 [2024-11-16 18:51:23.819463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.627 "name": "Existed_Raid", 00:10:40.627 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:40.627 "strip_size_kb": 64, 00:10:40.627 "state": "configuring", 00:10:40.627 "raid_level": "concat", 00:10:40.627 "superblock": true, 00:10:40.627 "num_base_bdevs": 4, 00:10:40.627 "num_base_bdevs_discovered": 2, 00:10:40.627 "num_base_bdevs_operational": 4, 00:10:40.627 "base_bdevs_list": [ 00:10:40.627 { 00:10:40.627 "name": "BaseBdev1", 00:10:40.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.627 "is_configured": false, 00:10:40.627 "data_offset": 0, 00:10:40.627 "data_size": 0 00:10:40.627 }, 00:10:40.627 { 00:10:40.627 "name": null, 00:10:40.627 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:40.627 "is_configured": false, 00:10:40.627 "data_offset": 0, 00:10:40.627 "data_size": 63488 
00:10:40.627 }, 00:10:40.627 { 00:10:40.627 "name": "BaseBdev3", 00:10:40.627 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:40.627 "is_configured": true, 00:10:40.627 "data_offset": 2048, 00:10:40.627 "data_size": 63488 00:10:40.627 }, 00:10:40.627 { 00:10:40.627 "name": "BaseBdev4", 00:10:40.627 "uuid": "eb654df7-cb90-48a1-b738-3a3315656796", 00:10:40.627 "is_configured": true, 00:10:40.627 "data_offset": 2048, 00:10:40.627 "data_size": 63488 00:10:40.627 } 00:10:40.627 ] 00:10:40.627 }' 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.627 18:51:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.887 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.888 [2024-11-16 18:51:24.306905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:40.888 BaseBdev1 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.888 [ 00:10:40.888 { 00:10:40.888 "name": "BaseBdev1", 00:10:40.888 "aliases": [ 00:10:40.888 "91546fc2-374f-46b3-aac1-2673a6c2c529" 00:10:40.888 ], 00:10:40.888 "product_name": "Malloc disk", 00:10:40.888 "block_size": 512, 00:10:40.888 "num_blocks": 65536, 00:10:40.888 "uuid": "91546fc2-374f-46b3-aac1-2673a6c2c529", 00:10:40.888 "assigned_rate_limits": { 00:10:40.888 "rw_ios_per_sec": 0, 00:10:40.888 "rw_mbytes_per_sec": 0, 
00:10:40.888 "r_mbytes_per_sec": 0, 00:10:40.888 "w_mbytes_per_sec": 0 00:10:40.888 }, 00:10:40.888 "claimed": true, 00:10:40.888 "claim_type": "exclusive_write", 00:10:40.888 "zoned": false, 00:10:40.888 "supported_io_types": { 00:10:40.888 "read": true, 00:10:40.888 "write": true, 00:10:40.888 "unmap": true, 00:10:40.888 "flush": true, 00:10:40.888 "reset": true, 00:10:40.888 "nvme_admin": false, 00:10:40.888 "nvme_io": false, 00:10:40.888 "nvme_io_md": false, 00:10:40.888 "write_zeroes": true, 00:10:40.888 "zcopy": true, 00:10:40.888 "get_zone_info": false, 00:10:40.888 "zone_management": false, 00:10:40.888 "zone_append": false, 00:10:40.888 "compare": false, 00:10:40.888 "compare_and_write": false, 00:10:40.888 "abort": true, 00:10:40.888 "seek_hole": false, 00:10:40.888 "seek_data": false, 00:10:40.888 "copy": true, 00:10:40.888 "nvme_iov_md": false 00:10:40.888 }, 00:10:40.888 "memory_domains": [ 00:10:40.888 { 00:10:40.888 "dma_device_id": "system", 00:10:40.888 "dma_device_type": 1 00:10:40.888 }, 00:10:40.888 { 00:10:40.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.888 "dma_device_type": 2 00:10:40.888 } 00:10:40.888 ], 00:10:40.888 "driver_specific": {} 00:10:40.888 } 00:10:40.888 ] 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.888 18:51:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.888 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.148 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.149 "name": "Existed_Raid", 00:10:41.149 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:41.149 "strip_size_kb": 64, 00:10:41.149 "state": "configuring", 00:10:41.149 "raid_level": "concat", 00:10:41.149 "superblock": true, 00:10:41.149 "num_base_bdevs": 4, 00:10:41.149 "num_base_bdevs_discovered": 3, 00:10:41.149 "num_base_bdevs_operational": 4, 00:10:41.149 "base_bdevs_list": [ 00:10:41.149 { 00:10:41.149 "name": "BaseBdev1", 00:10:41.149 "uuid": "91546fc2-374f-46b3-aac1-2673a6c2c529", 00:10:41.149 "is_configured": true, 00:10:41.149 "data_offset": 2048, 00:10:41.149 "data_size": 63488 00:10:41.149 }, 00:10:41.149 { 
00:10:41.149 "name": null, 00:10:41.149 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:41.149 "is_configured": false, 00:10:41.149 "data_offset": 0, 00:10:41.149 "data_size": 63488 00:10:41.149 }, 00:10:41.149 { 00:10:41.149 "name": "BaseBdev3", 00:10:41.149 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:41.149 "is_configured": true, 00:10:41.149 "data_offset": 2048, 00:10:41.149 "data_size": 63488 00:10:41.149 }, 00:10:41.149 { 00:10:41.149 "name": "BaseBdev4", 00:10:41.149 "uuid": "eb654df7-cb90-48a1-b738-3a3315656796", 00:10:41.149 "is_configured": true, 00:10:41.149 "data_offset": 2048, 00:10:41.149 "data_size": 63488 00:10:41.149 } 00:10:41.149 ] 00:10:41.149 }' 00:10:41.149 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.149 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.408 [2024-11-16 18:51:24.846105] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.408 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.409 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.668 18:51:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.668 "name": "Existed_Raid", 00:10:41.668 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:41.668 "strip_size_kb": 64, 00:10:41.668 "state": "configuring", 00:10:41.668 "raid_level": "concat", 00:10:41.668 "superblock": true, 00:10:41.668 "num_base_bdevs": 4, 00:10:41.668 "num_base_bdevs_discovered": 2, 00:10:41.668 "num_base_bdevs_operational": 4, 00:10:41.668 "base_bdevs_list": [ 00:10:41.668 { 00:10:41.668 "name": "BaseBdev1", 00:10:41.668 "uuid": "91546fc2-374f-46b3-aac1-2673a6c2c529", 00:10:41.668 "is_configured": true, 00:10:41.668 "data_offset": 2048, 00:10:41.668 "data_size": 63488 00:10:41.668 }, 00:10:41.668 { 00:10:41.668 "name": null, 00:10:41.668 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:41.668 "is_configured": false, 00:10:41.668 "data_offset": 0, 00:10:41.668 "data_size": 63488 00:10:41.668 }, 00:10:41.668 { 00:10:41.668 "name": null, 00:10:41.668 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:41.668 "is_configured": false, 00:10:41.668 "data_offset": 0, 00:10:41.668 "data_size": 63488 00:10:41.668 }, 00:10:41.668 { 00:10:41.668 "name": "BaseBdev4", 00:10:41.668 "uuid": "eb654df7-cb90-48a1-b738-3a3315656796", 00:10:41.668 "is_configured": true, 00:10:41.668 "data_offset": 2048, 00:10:41.668 "data_size": 63488 00:10:41.668 } 00:10:41.668 ] 00:10:41.668 }' 00:10:41.668 18:51:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.668 18:51:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.928 
18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.928 [2024-11-16 18:51:25.357201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.928 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.187 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.187 "name": "Existed_Raid", 00:10:42.187 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:42.187 "strip_size_kb": 64, 00:10:42.187 "state": "configuring", 00:10:42.187 "raid_level": "concat", 00:10:42.187 "superblock": true, 00:10:42.187 "num_base_bdevs": 4, 00:10:42.187 "num_base_bdevs_discovered": 3, 00:10:42.187 "num_base_bdevs_operational": 4, 00:10:42.187 "base_bdevs_list": [ 00:10:42.187 { 00:10:42.187 "name": "BaseBdev1", 00:10:42.187 "uuid": "91546fc2-374f-46b3-aac1-2673a6c2c529", 00:10:42.187 "is_configured": true, 00:10:42.187 "data_offset": 2048, 00:10:42.187 "data_size": 63488 00:10:42.187 }, 00:10:42.187 { 00:10:42.187 "name": null, 00:10:42.187 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:42.187 "is_configured": false, 00:10:42.187 "data_offset": 0, 00:10:42.187 "data_size": 63488 00:10:42.187 }, 00:10:42.187 { 00:10:42.187 "name": "BaseBdev3", 00:10:42.187 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:42.187 "is_configured": true, 00:10:42.187 "data_offset": 2048, 00:10:42.187 "data_size": 63488 00:10:42.187 }, 00:10:42.187 { 00:10:42.187 "name": "BaseBdev4", 00:10:42.187 "uuid": 
"eb654df7-cb90-48a1-b738-3a3315656796", 00:10:42.187 "is_configured": true, 00:10:42.187 "data_offset": 2048, 00:10:42.187 "data_size": 63488 00:10:42.187 } 00:10:42.187 ] 00:10:42.187 }' 00:10:42.187 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.187 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.447 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:42.447 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.447 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.447 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.447 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.447 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:42.447 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:42.447 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.447 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.447 [2024-11-16 18:51:25.832452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.707 "name": "Existed_Raid", 00:10:42.707 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:42.707 "strip_size_kb": 64, 00:10:42.707 "state": "configuring", 00:10:42.707 "raid_level": "concat", 00:10:42.707 "superblock": true, 00:10:42.707 "num_base_bdevs": 4, 00:10:42.707 "num_base_bdevs_discovered": 2, 00:10:42.707 "num_base_bdevs_operational": 4, 00:10:42.707 "base_bdevs_list": [ 00:10:42.707 { 00:10:42.707 "name": null, 00:10:42.707 
"uuid": "91546fc2-374f-46b3-aac1-2673a6c2c529", 00:10:42.707 "is_configured": false, 00:10:42.707 "data_offset": 0, 00:10:42.707 "data_size": 63488 00:10:42.707 }, 00:10:42.707 { 00:10:42.707 "name": null, 00:10:42.707 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:42.707 "is_configured": false, 00:10:42.707 "data_offset": 0, 00:10:42.707 "data_size": 63488 00:10:42.707 }, 00:10:42.707 { 00:10:42.707 "name": "BaseBdev3", 00:10:42.707 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:42.707 "is_configured": true, 00:10:42.707 "data_offset": 2048, 00:10:42.707 "data_size": 63488 00:10:42.707 }, 00:10:42.707 { 00:10:42.707 "name": "BaseBdev4", 00:10:42.707 "uuid": "eb654df7-cb90-48a1-b738-3a3315656796", 00:10:42.707 "is_configured": true, 00:10:42.707 "data_offset": 2048, 00:10:42.707 "data_size": 63488 00:10:42.707 } 00:10:42.707 ] 00:10:42.707 }' 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.707 18:51:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.967 [2024-11-16 18:51:26.375675] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.967 18:51:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.967 "name": "Existed_Raid", 00:10:42.967 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:42.967 "strip_size_kb": 64, 00:10:42.967 "state": "configuring", 00:10:42.967 "raid_level": "concat", 00:10:42.967 "superblock": true, 00:10:42.967 "num_base_bdevs": 4, 00:10:42.967 "num_base_bdevs_discovered": 3, 00:10:42.967 "num_base_bdevs_operational": 4, 00:10:42.967 "base_bdevs_list": [ 00:10:42.967 { 00:10:42.967 "name": null, 00:10:42.967 "uuid": "91546fc2-374f-46b3-aac1-2673a6c2c529", 00:10:42.967 "is_configured": false, 00:10:42.967 "data_offset": 0, 00:10:42.967 "data_size": 63488 00:10:42.967 }, 00:10:42.967 { 00:10:42.967 "name": "BaseBdev2", 00:10:42.967 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:42.967 "is_configured": true, 00:10:42.967 "data_offset": 2048, 00:10:42.967 "data_size": 63488 00:10:42.967 }, 00:10:42.967 { 00:10:42.967 "name": "BaseBdev3", 00:10:42.967 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:42.967 "is_configured": true, 00:10:42.967 "data_offset": 2048, 00:10:42.967 "data_size": 63488 00:10:42.967 }, 00:10:42.967 { 00:10:42.967 "name": "BaseBdev4", 00:10:42.967 "uuid": "eb654df7-cb90-48a1-b738-3a3315656796", 00:10:42.967 "is_configured": true, 00:10:42.967 "data_offset": 2048, 00:10:42.967 "data_size": 63488 00:10:42.967 } 00:10:42.967 ] 00:10:42.967 }' 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.967 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.538 18:51:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.538 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 91546fc2-374f-46b3-aac1-2673a6c2c529 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.539 [2024-11-16 18:51:26.937539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:43.539 [2024-11-16 18:51:26.937859] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:43.539 [2024-11-16 18:51:26.937910] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:43.539 [2024-11-16 18:51:26.938205] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:43.539 NewBaseBdev 00:10:43.539 [2024-11-16 18:51:26.938408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:43.539 [2024-11-16 18:51:26.938456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:43.539 [2024-11-16 18:51:26.938623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.539 18:51:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.539 [ 00:10:43.539 { 00:10:43.539 "name": "NewBaseBdev", 00:10:43.539 "aliases": [ 00:10:43.539 "91546fc2-374f-46b3-aac1-2673a6c2c529" 00:10:43.539 ], 00:10:43.539 "product_name": "Malloc disk", 00:10:43.539 "block_size": 512, 00:10:43.539 "num_blocks": 65536, 00:10:43.539 "uuid": "91546fc2-374f-46b3-aac1-2673a6c2c529", 00:10:43.539 "assigned_rate_limits": { 00:10:43.539 "rw_ios_per_sec": 0, 00:10:43.539 "rw_mbytes_per_sec": 0, 00:10:43.539 "r_mbytes_per_sec": 0, 00:10:43.539 "w_mbytes_per_sec": 0 00:10:43.539 }, 00:10:43.539 "claimed": true, 00:10:43.539 "claim_type": "exclusive_write", 00:10:43.539 "zoned": false, 00:10:43.539 "supported_io_types": { 00:10:43.539 "read": true, 00:10:43.539 "write": true, 00:10:43.539 "unmap": true, 00:10:43.539 "flush": true, 00:10:43.539 "reset": true, 00:10:43.539 "nvme_admin": false, 00:10:43.539 "nvme_io": false, 00:10:43.539 "nvme_io_md": false, 00:10:43.539 "write_zeroes": true, 00:10:43.539 "zcopy": true, 00:10:43.539 "get_zone_info": false, 00:10:43.539 "zone_management": false, 00:10:43.539 "zone_append": false, 00:10:43.539 "compare": false, 00:10:43.539 "compare_and_write": false, 00:10:43.539 "abort": true, 00:10:43.539 "seek_hole": false, 00:10:43.539 "seek_data": false, 00:10:43.539 "copy": true, 00:10:43.539 "nvme_iov_md": false 00:10:43.539 }, 00:10:43.539 "memory_domains": [ 00:10:43.539 { 00:10:43.539 "dma_device_id": "system", 00:10:43.539 "dma_device_type": 1 00:10:43.539 }, 00:10:43.539 { 00:10:43.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.539 "dma_device_type": 2 00:10:43.539 } 00:10:43.539 ], 00:10:43.539 "driver_specific": {} 00:10:43.539 } 00:10:43.539 ] 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:43.539 18:51:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:43.539 18:51:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.539 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.539 "name": "Existed_Raid", 00:10:43.539 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:43.539 "strip_size_kb": 64, 00:10:43.539 
"state": "online", 00:10:43.539 "raid_level": "concat", 00:10:43.539 "superblock": true, 00:10:43.539 "num_base_bdevs": 4, 00:10:43.539 "num_base_bdevs_discovered": 4, 00:10:43.539 "num_base_bdevs_operational": 4, 00:10:43.539 "base_bdevs_list": [ 00:10:43.539 { 00:10:43.539 "name": "NewBaseBdev", 00:10:43.539 "uuid": "91546fc2-374f-46b3-aac1-2673a6c2c529", 00:10:43.539 "is_configured": true, 00:10:43.539 "data_offset": 2048, 00:10:43.539 "data_size": 63488 00:10:43.539 }, 00:10:43.539 { 00:10:43.539 "name": "BaseBdev2", 00:10:43.539 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:43.539 "is_configured": true, 00:10:43.539 "data_offset": 2048, 00:10:43.539 "data_size": 63488 00:10:43.539 }, 00:10:43.539 { 00:10:43.539 "name": "BaseBdev3", 00:10:43.539 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:43.539 "is_configured": true, 00:10:43.539 "data_offset": 2048, 00:10:43.539 "data_size": 63488 00:10:43.539 }, 00:10:43.539 { 00:10:43.539 "name": "BaseBdev4", 00:10:43.539 "uuid": "eb654df7-cb90-48a1-b738-3a3315656796", 00:10:43.539 "is_configured": true, 00:10:43.539 "data_offset": 2048, 00:10:43.539 "data_size": 63488 00:10:43.539 } 00:10:43.539 ] 00:10:43.539 }' 00:10:43.539 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.539 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:44.108 
18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:44.108 [2024-11-16 18:51:27.405140] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.108 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:44.108 "name": "Existed_Raid", 00:10:44.108 "aliases": [ 00:10:44.108 "9b1465ab-ada0-498e-acac-496aeba1c088" 00:10:44.108 ], 00:10:44.108 "product_name": "Raid Volume", 00:10:44.108 "block_size": 512, 00:10:44.108 "num_blocks": 253952, 00:10:44.108 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:44.108 "assigned_rate_limits": { 00:10:44.108 "rw_ios_per_sec": 0, 00:10:44.108 "rw_mbytes_per_sec": 0, 00:10:44.108 "r_mbytes_per_sec": 0, 00:10:44.108 "w_mbytes_per_sec": 0 00:10:44.108 }, 00:10:44.108 "claimed": false, 00:10:44.108 "zoned": false, 00:10:44.108 "supported_io_types": { 00:10:44.108 "read": true, 00:10:44.108 "write": true, 00:10:44.108 "unmap": true, 00:10:44.108 "flush": true, 00:10:44.108 "reset": true, 00:10:44.108 "nvme_admin": false, 00:10:44.108 "nvme_io": false, 00:10:44.108 "nvme_io_md": false, 00:10:44.108 "write_zeroes": true, 00:10:44.108 "zcopy": false, 00:10:44.108 "get_zone_info": false, 00:10:44.108 "zone_management": false, 00:10:44.108 "zone_append": false, 00:10:44.108 "compare": false, 00:10:44.108 "compare_and_write": false, 00:10:44.108 "abort": 
false, 00:10:44.108 "seek_hole": false, 00:10:44.108 "seek_data": false, 00:10:44.108 "copy": false, 00:10:44.108 "nvme_iov_md": false 00:10:44.108 }, 00:10:44.108 "memory_domains": [ 00:10:44.108 { 00:10:44.108 "dma_device_id": "system", 00:10:44.108 "dma_device_type": 1 00:10:44.108 }, 00:10:44.108 { 00:10:44.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.108 "dma_device_type": 2 00:10:44.108 }, 00:10:44.108 { 00:10:44.108 "dma_device_id": "system", 00:10:44.108 "dma_device_type": 1 00:10:44.108 }, 00:10:44.108 { 00:10:44.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.108 "dma_device_type": 2 00:10:44.108 }, 00:10:44.108 { 00:10:44.109 "dma_device_id": "system", 00:10:44.109 "dma_device_type": 1 00:10:44.109 }, 00:10:44.109 { 00:10:44.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.109 "dma_device_type": 2 00:10:44.109 }, 00:10:44.109 { 00:10:44.109 "dma_device_id": "system", 00:10:44.109 "dma_device_type": 1 00:10:44.109 }, 00:10:44.109 { 00:10:44.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.109 "dma_device_type": 2 00:10:44.109 } 00:10:44.109 ], 00:10:44.109 "driver_specific": { 00:10:44.109 "raid": { 00:10:44.109 "uuid": "9b1465ab-ada0-498e-acac-496aeba1c088", 00:10:44.109 "strip_size_kb": 64, 00:10:44.109 "state": "online", 00:10:44.109 "raid_level": "concat", 00:10:44.109 "superblock": true, 00:10:44.109 "num_base_bdevs": 4, 00:10:44.109 "num_base_bdevs_discovered": 4, 00:10:44.109 "num_base_bdevs_operational": 4, 00:10:44.109 "base_bdevs_list": [ 00:10:44.109 { 00:10:44.109 "name": "NewBaseBdev", 00:10:44.109 "uuid": "91546fc2-374f-46b3-aac1-2673a6c2c529", 00:10:44.109 "is_configured": true, 00:10:44.109 "data_offset": 2048, 00:10:44.109 "data_size": 63488 00:10:44.109 }, 00:10:44.109 { 00:10:44.109 "name": "BaseBdev2", 00:10:44.109 "uuid": "8ba563aa-b7db-4fe6-9a8b-e998e8d07241", 00:10:44.109 "is_configured": true, 00:10:44.109 "data_offset": 2048, 00:10:44.109 "data_size": 63488 00:10:44.109 }, 00:10:44.109 { 00:10:44.109 
"name": "BaseBdev3", 00:10:44.109 "uuid": "57949278-31e9-4d2b-a0c8-3bdd702854cb", 00:10:44.109 "is_configured": true, 00:10:44.109 "data_offset": 2048, 00:10:44.109 "data_size": 63488 00:10:44.109 }, 00:10:44.109 { 00:10:44.109 "name": "BaseBdev4", 00:10:44.109 "uuid": "eb654df7-cb90-48a1-b738-3a3315656796", 00:10:44.109 "is_configured": true, 00:10:44.109 "data_offset": 2048, 00:10:44.109 "data_size": 63488 00:10:44.109 } 00:10:44.109 ] 00:10:44.109 } 00:10:44.109 } 00:10:44.109 }' 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:44.109 BaseBdev2 00:10:44.109 BaseBdev3 00:10:44.109 BaseBdev4' 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.109 18:51:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.109 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.369 [2024-11-16 18:51:27.668333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:44.369 [2024-11-16 18:51:27.668361] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.369 [2024-11-16 18:51:27.668436] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.369 [2024-11-16 18:51:27.668507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.369 [2024-11-16 18:51:27.668517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71727 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71727 ']' 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71727 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71727 00:10:44.369 killing process with pid 71727 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71727' 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71727 00:10:44.369 [2024-11-16 18:51:27.704108] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.369 18:51:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71727 00:10:44.938 [2024-11-16 18:51:28.105725] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.881 ************************************ 00:10:45.881 END TEST raid_state_function_test_sb 00:10:45.881 ************************************ 00:10:45.881 18:51:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:45.881 00:10:45.881 real 0m11.159s 00:10:45.881 user 0m17.717s 00:10:45.881 sys 
0m1.933s 00:10:45.881 18:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.881 18:51:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.881 18:51:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:10:45.881 18:51:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:45.881 18:51:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.881 18:51:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.881 ************************************ 00:10:45.881 START TEST raid_superblock_test 00:10:45.881 ************************************ 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 
-- # local strip_size_create_arg 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72391 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72391 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72391 ']' 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.881 18:51:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.140 [2024-11-16 18:51:29.355341] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:46.140 [2024-11-16 18:51:29.355546] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72391 ] 00:10:46.140 [2024-11-16 18:51:29.533146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.401 [2024-11-16 18:51:29.647897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.401 [2024-11-16 18:51:29.847678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.401 [2024-11-16 18:51:29.847801] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:46.971 
18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.971 malloc1 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.971 [2024-11-16 18:51:30.242065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:46.971 [2024-11-16 18:51:30.242132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.971 [2024-11-16 18:51:30.242152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:46.971 [2024-11-16 18:51:30.242161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.971 [2024-11-16 18:51:30.244321] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.971 [2024-11-16 18:51:30.244415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:46.971 pt1 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.971 malloc2 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.971 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.972 [2024-11-16 18:51:30.296957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.972 [2024-11-16 18:51:30.297056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.972 [2024-11-16 18:51:30.297095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:46.972 [2024-11-16 18:51:30.297122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.972 [2024-11-16 18:51:30.299214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.972 [2024-11-16 18:51:30.299283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.972 
pt2 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.972 malloc3 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.972 [2024-11-16 18:51:30.374326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:46.972 [2024-11-16 18:51:30.374421] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.972 [2024-11-16 18:51:30.374458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:46.972 [2024-11-16 18:51:30.374485] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.972 [2024-11-16 18:51:30.376584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.972 [2024-11-16 18:51:30.376666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:46.972 pt3 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.972 malloc4 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.972 [2024-11-16 18:51:30.430583] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:46.972 [2024-11-16 18:51:30.430717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.972 [2024-11-16 18:51:30.430759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:46.972 [2024-11-16 18:51:30.430769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.972 [2024-11-16 18:51:30.432891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.972 [2024-11-16 18:51:30.432929] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:46.972 pt4 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.972 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.231 [2024-11-16 18:51:30.442591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:47.231 [2024-11-16 
18:51:30.444452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.231 [2024-11-16 18:51:30.444556] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:47.231 [2024-11-16 18:51:30.444657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:47.231 [2024-11-16 18:51:30.444902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:47.231 [2024-11-16 18:51:30.444947] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.231 [2024-11-16 18:51:30.445204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:47.231 [2024-11-16 18:51:30.445405] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:47.231 [2024-11-16 18:51:30.445450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:47.231 [2024-11-16 18:51:30.445627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.231 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.231 "name": "raid_bdev1", 00:10:47.231 "uuid": "9e602c77-79e8-4057-9f08-36100140b443", 00:10:47.231 "strip_size_kb": 64, 00:10:47.231 "state": "online", 00:10:47.231 "raid_level": "concat", 00:10:47.231 "superblock": true, 00:10:47.231 "num_base_bdevs": 4, 00:10:47.231 "num_base_bdevs_discovered": 4, 00:10:47.231 "num_base_bdevs_operational": 4, 00:10:47.231 "base_bdevs_list": [ 00:10:47.231 { 00:10:47.232 "name": "pt1", 00:10:47.232 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.232 "is_configured": true, 00:10:47.232 "data_offset": 2048, 00:10:47.232 "data_size": 63488 00:10:47.232 }, 00:10:47.232 { 00:10:47.232 "name": "pt2", 00:10:47.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.232 "is_configured": true, 00:10:47.232 "data_offset": 2048, 00:10:47.232 "data_size": 63488 00:10:47.232 }, 00:10:47.232 { 00:10:47.232 "name": "pt3", 00:10:47.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.232 "is_configured": true, 00:10:47.232 "data_offset": 2048, 00:10:47.232 
"data_size": 63488 00:10:47.232 }, 00:10:47.232 { 00:10:47.232 "name": "pt4", 00:10:47.232 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:47.232 "is_configured": true, 00:10:47.232 "data_offset": 2048, 00:10:47.232 "data_size": 63488 00:10:47.232 } 00:10:47.232 ] 00:10:47.232 }' 00:10:47.232 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.232 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.491 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.491 [2024-11-16 18:51:30.950052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.867 18:51:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.867 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.867 "name": "raid_bdev1", 00:10:47.867 "aliases": [ 00:10:47.867 "9e602c77-79e8-4057-9f08-36100140b443" 
00:10:47.867 ], 00:10:47.867 "product_name": "Raid Volume", 00:10:47.867 "block_size": 512, 00:10:47.867 "num_blocks": 253952, 00:10:47.867 "uuid": "9e602c77-79e8-4057-9f08-36100140b443", 00:10:47.867 "assigned_rate_limits": { 00:10:47.867 "rw_ios_per_sec": 0, 00:10:47.867 "rw_mbytes_per_sec": 0, 00:10:47.867 "r_mbytes_per_sec": 0, 00:10:47.867 "w_mbytes_per_sec": 0 00:10:47.867 }, 00:10:47.867 "claimed": false, 00:10:47.867 "zoned": false, 00:10:47.867 "supported_io_types": { 00:10:47.867 "read": true, 00:10:47.867 "write": true, 00:10:47.867 "unmap": true, 00:10:47.867 "flush": true, 00:10:47.867 "reset": true, 00:10:47.867 "nvme_admin": false, 00:10:47.867 "nvme_io": false, 00:10:47.867 "nvme_io_md": false, 00:10:47.867 "write_zeroes": true, 00:10:47.867 "zcopy": false, 00:10:47.867 "get_zone_info": false, 00:10:47.867 "zone_management": false, 00:10:47.867 "zone_append": false, 00:10:47.867 "compare": false, 00:10:47.867 "compare_and_write": false, 00:10:47.867 "abort": false, 00:10:47.867 "seek_hole": false, 00:10:47.867 "seek_data": false, 00:10:47.867 "copy": false, 00:10:47.867 "nvme_iov_md": false 00:10:47.867 }, 00:10:47.867 "memory_domains": [ 00:10:47.867 { 00:10:47.867 "dma_device_id": "system", 00:10:47.867 "dma_device_type": 1 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.867 "dma_device_type": 2 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "dma_device_id": "system", 00:10:47.867 "dma_device_type": 1 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.867 "dma_device_type": 2 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "dma_device_id": "system", 00:10:47.867 "dma_device_type": 1 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.867 "dma_device_type": 2 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "dma_device_id": "system", 00:10:47.867 "dma_device_type": 1 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:47.867 "dma_device_type": 2 00:10:47.867 } 00:10:47.867 ], 00:10:47.867 "driver_specific": { 00:10:47.867 "raid": { 00:10:47.867 "uuid": "9e602c77-79e8-4057-9f08-36100140b443", 00:10:47.867 "strip_size_kb": 64, 00:10:47.867 "state": "online", 00:10:47.867 "raid_level": "concat", 00:10:47.867 "superblock": true, 00:10:47.867 "num_base_bdevs": 4, 00:10:47.867 "num_base_bdevs_discovered": 4, 00:10:47.867 "num_base_bdevs_operational": 4, 00:10:47.867 "base_bdevs_list": [ 00:10:47.867 { 00:10:47.867 "name": "pt1", 00:10:47.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:47.867 "is_configured": true, 00:10:47.867 "data_offset": 2048, 00:10:47.867 "data_size": 63488 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "name": "pt2", 00:10:47.867 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.867 "is_configured": true, 00:10:47.867 "data_offset": 2048, 00:10:47.867 "data_size": 63488 00:10:47.867 }, 00:10:47.867 { 00:10:47.867 "name": "pt3", 00:10:47.867 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.867 "is_configured": true, 00:10:47.868 "data_offset": 2048, 00:10:47.868 "data_size": 63488 00:10:47.868 }, 00:10:47.868 { 00:10:47.868 "name": "pt4", 00:10:47.868 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:47.868 "is_configured": true, 00:10:47.868 "data_offset": 2048, 00:10:47.868 "data_size": 63488 00:10:47.868 } 00:10:47.868 ] 00:10:47.868 } 00:10:47.868 } 00:10:47.868 }' 00:10:47.868 18:51:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:47.868 pt2 00:10:47.868 pt3 00:10:47.868 pt4' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.868 18:51:31 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.868 [2024-11-16 18:51:31.261438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9e602c77-79e8-4057-9f08-36100140b443
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9e602c77-79e8-4057-9f08-36100140b443 ']'
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.868 [2024-11-16 18:51:31.293075] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:47.868 [2024-11-16 18:51:31.293102] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:47.868 [2024-11-16 18:51:31.293180] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:47.868 [2024-11-16 18:51:31.293247] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:47.868 [2024-11-16 18:51:31.293266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:47.868 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.129 [2024-11-16 18:51:31.444848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:48.129 [2024-11-16 18:51:31.446778] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:48.129 [2024-11-16 18:51:31.446828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:10:48.129 [2024-11-16 18:51:31.446861] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:10:48.129 [2024-11-16 18:51:31.446910] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:10:48.129 [2024-11-16 18:51:31.446959] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:10:48.129 [2024-11-16 18:51:31.446979] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:10:48.129 [2024-11-16 18:51:31.446997] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:10:48.129 [2024-11-16 18:51:31.447011] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:48.129 [2024-11-16 18:51:31.447022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:10:48.129 request:
00:10:48.129 {
00:10:48.129 "name": "raid_bdev1",
00:10:48.129 "raid_level": "concat",
00:10:48.129 "base_bdevs": [
00:10:48.129 "malloc1",
00:10:48.129 "malloc2",
00:10:48.129 "malloc3",
00:10:48.129 "malloc4"
00:10:48.129 ],
00:10:48.129 "strip_size_kb": 64,
00:10:48.129 "superblock": false,
00:10:48.129 "method": "bdev_raid_create",
00:10:48.129 "req_id": 1
00:10:48.129 }
00:10:48.129 Got JSON-RPC error response
00:10:48.129 response:
00:10:48.129 {
00:10:48.129 "code": -17,
00:10:48.129 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:48.129 }
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.129 [2024-11-16 18:51:31.500724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:48.129 [2024-11-16 18:51:31.500775] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:48.129 [2024-11-16 18:51:31.500791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:10:48.129 [2024-11-16 18:51:31.500801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:48.129 [2024-11-16 18:51:31.502914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:48.129 [2024-11-16 18:51:31.502952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:48.129 [2024-11-16 18:51:31.503023] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:10:48.129 [2024-11-16 18:51:31.503098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:48.129 pt1
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.129 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.129 "name": "raid_bdev1",
00:10:48.129 "uuid": "9e602c77-79e8-4057-9f08-36100140b443",
00:10:48.129 "strip_size_kb": 64,
00:10:48.129 "state": "configuring",
00:10:48.129 "raid_level": "concat",
00:10:48.129 "superblock": true,
00:10:48.129 "num_base_bdevs": 4,
00:10:48.129 "num_base_bdevs_discovered": 1,
00:10:48.129 "num_base_bdevs_operational": 4,
00:10:48.129 "base_bdevs_list": [
00:10:48.129 {
00:10:48.129 "name": "pt1",
00:10:48.129 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:48.129 "is_configured": true,
00:10:48.129 "data_offset": 2048,
00:10:48.129 "data_size": 63488
00:10:48.129 },
00:10:48.129 {
00:10:48.129 "name": null,
00:10:48.130 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:48.130 "is_configured": false,
00:10:48.130 "data_offset": 2048,
00:10:48.130 "data_size": 63488
00:10:48.130 },
00:10:48.130 {
00:10:48.130 "name": null,
00:10:48.130 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:48.130 "is_configured": false,
00:10:48.130 "data_offset": 2048,
00:10:48.130 "data_size": 63488
00:10:48.130 },
00:10:48.130 {
00:10:48.130 "name": null,
00:10:48.130 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:48.130 "is_configured": false,
00:10:48.130 "data_offset": 2048,
00:10:48.130 "data_size": 63488
00:10:48.130 }
00:10:48.130 ]
00:10:48.130 }'
00:10:48.130 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.130 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.700 [2024-11-16 18:51:31.884082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:48.700 [2024-11-16 18:51:31.884174] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:48.700 [2024-11-16 18:51:31.884193] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:10:48.700 [2024-11-16 18:51:31.884205] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:48.700 [2024-11-16 18:51:31.884640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:48.700 [2024-11-16 18:51:31.884677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:48.700 [2024-11-16 18:51:31.884757] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:48.700 [2024-11-16 18:51:31.884791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:48.700 pt2
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.700 [2024-11-16 18:51:31.896062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.700 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.700 "name": "raid_bdev1",
00:10:48.700 "uuid": "9e602c77-79e8-4057-9f08-36100140b443",
00:10:48.700 "strip_size_kb": 64,
00:10:48.700 "state": "configuring",
00:10:48.700 "raid_level": "concat",
00:10:48.700 "superblock": true,
00:10:48.700 "num_base_bdevs": 4,
00:10:48.700 "num_base_bdevs_discovered": 1,
00:10:48.700 "num_base_bdevs_operational": 4,
00:10:48.700 "base_bdevs_list": [
00:10:48.700 {
00:10:48.700 "name": "pt1",
00:10:48.700 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:48.700 "is_configured": true,
00:10:48.700 "data_offset": 2048,
00:10:48.700 "data_size": 63488
00:10:48.700 },
00:10:48.700 {
00:10:48.700 "name": null,
00:10:48.700 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:48.700 "is_configured": false,
00:10:48.700 "data_offset": 0,
00:10:48.700 "data_size": 63488
00:10:48.700 },
00:10:48.700 {
00:10:48.700 "name": null,
00:10:48.700 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:48.700 "is_configured": false,
00:10:48.700 "data_offset": 2048,
00:10:48.700 "data_size": 63488
00:10:48.700 },
00:10:48.700 {
00:10:48.700 "name": null,
00:10:48.700 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:48.700 "is_configured": false,
00:10:48.700 "data_offset": 2048,
00:10:48.700 "data_size": 63488
00:10:48.700 }
00:10:48.700 ]
00:10:48.700 }'
00:10:48.701 18:51:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.701 18:51:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.960 [2024-11-16 18:51:32.275468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:48.960 [2024-11-16 18:51:32.275536] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:48.960 [2024-11-16 18:51:32.275557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:10:48.960 [2024-11-16 18:51:32.275566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:48.960 [2024-11-16 18:51:32.276051] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:48.960 [2024-11-16 18:51:32.276078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:48.960 [2024-11-16 18:51:32.276165] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:10:48.960 [2024-11-16 18:51:32.276193] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:48.960 pt2
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.960 [2024-11-16 18:51:32.287414] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:10:48.960 [2024-11-16 18:51:32.287465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:48.960 [2024-11-16 18:51:32.287487] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:10:48.960 [2024-11-16 18:51:32.287496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:48.960 [2024-11-16 18:51:32.287901] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:48.960 [2024-11-16 18:51:32.287926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:10:48.960 [2024-11-16 18:51:32.287987] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:10:48.960 [2024-11-16 18:51:32.288014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:10:48.960 pt3
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.960 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.960 [2024-11-16 18:51:32.299369] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:10:48.960 [2024-11-16 18:51:32.299420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:48.960 [2024-11-16 18:51:32.299453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:10:48.960 [2024-11-16 18:51:32.299460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:48.960 [2024-11-16 18:51:32.299801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:48.961 [2024-11-16 18:51:32.299847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:10:48.961 [2024-11-16 18:51:32.299907] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:10:48.961 [2024-11-16 18:51:32.299924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:10:48.961 [2024-11-16 18:51:32.300043] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:10:48.961 [2024-11-16 18:51:32.300060] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:10:48.961 [2024-11-16 18:51:32.300299] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:10:48.961 [2024-11-16 18:51:32.300459] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:10:48.961 [2024-11-16 18:51:32.300479] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:10:48.961 [2024-11-16 18:51:32.300611] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:48.961 pt4
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:10:48.961 "name": "raid_bdev1",
00:10:48.961 "uuid": "9e602c77-79e8-4057-9f08-36100140b443",
00:10:48.961 "strip_size_kb": 64,
00:10:48.961 "state": "online",
00:10:48.961 "raid_level": "concat",
00:10:48.961 "superblock": true,
00:10:48.961 "num_base_bdevs": 4,
00:10:48.961 "num_base_bdevs_discovered": 4,
00:10:48.961 "num_base_bdevs_operational": 4,
00:10:48.961 "base_bdevs_list": [
00:10:48.961 {
00:10:48.961 "name": "pt1",
00:10:48.961 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:48.961 "is_configured": true,
00:10:48.961 "data_offset": 2048,
00:10:48.961 "data_size": 63488
00:10:48.961 },
00:10:48.961 {
00:10:48.961 "name": "pt2",
00:10:48.961 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:48.961 "is_configured": true,
00:10:48.961 "data_offset": 2048,
00:10:48.961 "data_size": 63488
00:10:48.961 },
00:10:48.961 {
00:10:48.961 "name": "pt3",
00:10:48.961 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:48.961 "is_configured": true,
00:10:48.961 "data_offset": 2048,
00:10:48.961 "data_size": 63488
00:10:48.961 },
00:10:48.961 {
00:10:48.961 "name": "pt4",
00:10:48.961 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:48.961 "is_configured": true,
00:10:48.961 "data_offset": 2048,
00:10:48.961 "data_size": 63488
00:10:48.961 }
00:10:48.961 ]
00:10:48.961 }'
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:10:48.961 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:10:49.529 [2024-11-16 18:51:32.758989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.529 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:10:49.529 "name": "raid_bdev1",
00:10:49.529 "aliases": [
00:10:49.529 "9e602c77-79e8-4057-9f08-36100140b443"
00:10:49.529 ],
00:10:49.529 "product_name": "Raid Volume",
00:10:49.529 "block_size": 512,
00:10:49.529 "num_blocks": 253952,
00:10:49.529 "uuid": "9e602c77-79e8-4057-9f08-36100140b443",
00:10:49.529 "assigned_rate_limits": {
00:10:49.529 "rw_ios_per_sec": 0,
00:10:49.529 "rw_mbytes_per_sec": 0,
00:10:49.529 "r_mbytes_per_sec": 0,
00:10:49.529 "w_mbytes_per_sec": 0
00:10:49.529 },
00:10:49.529 "claimed": false,
00:10:49.529 "zoned": false,
00:10:49.529 "supported_io_types": {
00:10:49.529 "read": true,
00:10:49.529 "write": true,
00:10:49.529 "unmap": true,
00:10:49.529 "flush": true,
00:10:49.529 "reset": true,
00:10:49.529 "nvme_admin": false,
00:10:49.529 "nvme_io": false,
00:10:49.529 "nvme_io_md": false,
00:10:49.529 "write_zeroes": true,
00:10:49.529 "zcopy": false,
00:10:49.529 "get_zone_info": false,
00:10:49.529 "zone_management": false,
00:10:49.529 "zone_append": false,
00:10:49.529 "compare": false,
00:10:49.529 "compare_and_write": false,
00:10:49.529 "abort": false,
00:10:49.529 "seek_hole": false,
00:10:49.529 "seek_data": false,
00:10:49.529 "copy": false,
00:10:49.529 "nvme_iov_md": false
00:10:49.529 },
00:10:49.529 "memory_domains": [
00:10:49.529 {
00:10:49.529 "dma_device_id": "system",
00:10:49.529 "dma_device_type": 1
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:49.530 "dma_device_type": 2
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "dma_device_id": "system",
00:10:49.530 "dma_device_type": 1
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:49.530 "dma_device_type": 2
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "dma_device_id": "system",
00:10:49.530 "dma_device_type": 1
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:49.530 "dma_device_type": 2
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "dma_device_id": "system",
00:10:49.530 "dma_device_type": 1
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:49.530 "dma_device_type": 2
00:10:49.530 }
00:10:49.530 ],
00:10:49.530 "driver_specific": {
00:10:49.530 "raid": {
00:10:49.530 "uuid": "9e602c77-79e8-4057-9f08-36100140b443",
00:10:49.530 "strip_size_kb": 64,
00:10:49.530 "state": "online",
00:10:49.530 "raid_level": "concat",
00:10:49.530 "superblock": true,
00:10:49.530 "num_base_bdevs": 4,
00:10:49.530 "num_base_bdevs_discovered": 4,
00:10:49.530 "num_base_bdevs_operational": 4,
00:10:49.530 "base_bdevs_list": [
00:10:49.530 {
00:10:49.530 "name": "pt1",
00:10:49.530 "uuid": "00000000-0000-0000-0000-000000000001",
00:10:49.530 "is_configured": true,
00:10:49.530 "data_offset": 2048,
00:10:49.530 "data_size": 63488
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "name": "pt2",
00:10:49.530 "uuid": "00000000-0000-0000-0000-000000000002",
00:10:49.530 "is_configured": true,
00:10:49.530 "data_offset": 2048,
00:10:49.530 "data_size": 63488
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "name": "pt3",
00:10:49.530 "uuid": "00000000-0000-0000-0000-000000000003",
00:10:49.530 "is_configured": true,
00:10:49.530 "data_offset": 2048,
00:10:49.530 "data_size": 63488
00:10:49.530 },
00:10:49.530 {
00:10:49.530 "name": "pt4",
00:10:49.530 "uuid": "00000000-0000-0000-0000-000000000004",
00:10:49.530 "is_configured": true,
00:10:49.530 "data_offset": 2048,
00:10:49.530 "data_size": 63488
00:10:49.530 }
00:10:49.530 ]
00:10:49.530 }
00:10:49.530 }
00:10:49.530 }'
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:10:49.530 pt2
00:10:49.530 pt3
00:10:49.530 pt4'
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.530 18:51:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:10:49.789 [2024-11-16 18:51:33.070377] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9e602c77-79e8-4057-9f08-36100140b443 '!=' 9e602c77-79e8-4057-9f08-36100140b443 ']'
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72391
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72391 ']'
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72391
00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:10:49.789 18:51:33
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72391 00:10:49.789 killing process with pid 72391 00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72391' 00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72391 00:10:49.789 [2024-11-16 18:51:33.154538] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:49.789 [2024-11-16 18:51:33.154633] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.789 [2024-11-16 18:51:33.154718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:49.789 [2024-11-16 18:51:33.154729] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:49.789 18:51:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72391 00:10:50.358 [2024-11-16 18:51:33.553841] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:51.297 ************************************ 00:10:51.297 END TEST raid_superblock_test 00:10:51.297 ************************************ 00:10:51.297 18:51:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:51.297 00:10:51.297 real 0m5.396s 00:10:51.297 user 0m7.632s 00:10:51.297 sys 0m0.960s 00:10:51.297 18:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:51.297 18:51:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.297 
18:51:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:10:51.297 18:51:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:51.297 18:51:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:51.297 18:51:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:51.297 ************************************ 00:10:51.297 START TEST raid_read_error_test 00:10:51.297 ************************************ 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:51.297 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Duk8I1t3HY 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72658 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72658 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72658 ']' 00:10:51.298 18:51:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.298 18:51:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:51.558 [2024-11-16 18:51:34.828975] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:51.558 [2024-11-16 18:51:34.829096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72658 ] 00:10:51.558 [2024-11-16 18:51:35.001174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.818 [2024-11-16 18:51:35.119909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:52.078 [2024-11-16 18:51:35.324147] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.078 [2024-11-16 18:51:35.324197] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.338 BaseBdev1_malloc 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.338 true 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.338 [2024-11-16 18:51:35.713486] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:52.338 [2024-11-16 18:51:35.713549] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.338 [2024-11-16 18:51:35.713569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:52.338 [2024-11-16 18:51:35.713580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.338 [2024-11-16 18:51:35.715702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.338 [2024-11-16 18:51:35.715739] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:52.338 BaseBdev1 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.338 BaseBdev2_malloc 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.338 true 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.338 [2024-11-16 18:51:35.768838] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:52.338 [2024-11-16 18:51:35.768902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.338 [2024-11-16 18:51:35.768918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:52.338 [2024-11-16 18:51:35.768929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.338 [2024-11-16 18:51:35.770981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.338 [2024-11-16 18:51:35.771019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:52.338 BaseBdev2 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.338 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 BaseBdev3_malloc 00:10:52.599 18:51:35 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 true 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 [2024-11-16 18:51:35.832226] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:52.599 [2024-11-16 18:51:35.832286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.599 [2024-11-16 18:51:35.832320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:52.599 [2024-11-16 18:51:35.832330] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.599 [2024-11-16 18:51:35.834375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.599 [2024-11-16 18:51:35.834415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:52.599 BaseBdev3 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 BaseBdev4_malloc 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 true 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 [2024-11-16 18:51:35.889447] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:52.599 [2024-11-16 18:51:35.889507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:52.599 [2024-11-16 18:51:35.889525] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:52.599 [2024-11-16 18:51:35.889535] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:52.599 [2024-11-16 18:51:35.891605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:52.599 [2024-11-16 18:51:35.891645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:52.599 BaseBdev4 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 [2024-11-16 18:51:35.897492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:52.599 [2024-11-16 18:51:35.899316] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.599 [2024-11-16 18:51:35.899388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.599 [2024-11-16 18:51:35.899452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:52.599 [2024-11-16 18:51:35.899671] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:52.599 [2024-11-16 18:51:35.899689] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:52.599 [2024-11-16 18:51:35.899939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:52.599 [2024-11-16 18:51:35.900104] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:52.599 [2024-11-16 18:51:35.900119] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:52.599 [2024-11-16 18:51:35.900260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:52.599 18:51:35 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.599 "name": "raid_bdev1", 00:10:52.599 "uuid": "264f012c-12de-4bc9-b381-f2c3e2194c47", 00:10:52.599 "strip_size_kb": 64, 00:10:52.599 "state": "online", 00:10:52.599 "raid_level": "concat", 00:10:52.599 "superblock": true, 00:10:52.599 "num_base_bdevs": 4, 00:10:52.599 "num_base_bdevs_discovered": 4, 00:10:52.599 "num_base_bdevs_operational": 4, 00:10:52.599 "base_bdevs_list": [ 
00:10:52.599 { 00:10:52.599 "name": "BaseBdev1", 00:10:52.599 "uuid": "5634bea7-ffed-5267-a6a6-239fcc4a770c", 00:10:52.599 "is_configured": true, 00:10:52.599 "data_offset": 2048, 00:10:52.599 "data_size": 63488 00:10:52.599 }, 00:10:52.599 { 00:10:52.599 "name": "BaseBdev2", 00:10:52.599 "uuid": "9dddbcfe-9343-5d70-8025-59ec288d0116", 00:10:52.599 "is_configured": true, 00:10:52.599 "data_offset": 2048, 00:10:52.599 "data_size": 63488 00:10:52.599 }, 00:10:52.599 { 00:10:52.599 "name": "BaseBdev3", 00:10:52.599 "uuid": "888da2a5-8719-5e48-ab7b-d523aaeb2a47", 00:10:52.599 "is_configured": true, 00:10:52.599 "data_offset": 2048, 00:10:52.599 "data_size": 63488 00:10:52.599 }, 00:10:52.599 { 00:10:52.599 "name": "BaseBdev4", 00:10:52.599 "uuid": "fae34cae-bab8-55a6-99f6-78f84e31c233", 00:10:52.599 "is_configured": true, 00:10:52.599 "data_offset": 2048, 00:10:52.599 "data_size": 63488 00:10:52.599 } 00:10:52.599 ] 00:10:52.599 }' 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.599 18:51:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.181 18:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:53.181 18:51:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:53.181 [2024-11-16 18:51:36.461885] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:54.121 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:54.121 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.121 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.121 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.121 18:51:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.122 18:51:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.122 "name": "raid_bdev1", 00:10:54.122 "uuid": "264f012c-12de-4bc9-b381-f2c3e2194c47", 00:10:54.122 "strip_size_kb": 64, 00:10:54.122 "state": "online", 00:10:54.122 "raid_level": "concat", 00:10:54.122 "superblock": true, 00:10:54.122 "num_base_bdevs": 4, 00:10:54.122 "num_base_bdevs_discovered": 4, 00:10:54.122 "num_base_bdevs_operational": 4, 00:10:54.122 "base_bdevs_list": [ 00:10:54.122 { 00:10:54.122 "name": "BaseBdev1", 00:10:54.122 "uuid": "5634bea7-ffed-5267-a6a6-239fcc4a770c", 00:10:54.122 "is_configured": true, 00:10:54.122 "data_offset": 2048, 00:10:54.122 "data_size": 63488 00:10:54.122 }, 00:10:54.122 { 00:10:54.122 "name": "BaseBdev2", 00:10:54.122 "uuid": "9dddbcfe-9343-5d70-8025-59ec288d0116", 00:10:54.122 "is_configured": true, 00:10:54.122 "data_offset": 2048, 00:10:54.122 "data_size": 63488 00:10:54.122 }, 00:10:54.122 { 00:10:54.122 "name": "BaseBdev3", 00:10:54.122 "uuid": "888da2a5-8719-5e48-ab7b-d523aaeb2a47", 00:10:54.122 "is_configured": true, 00:10:54.122 "data_offset": 2048, 00:10:54.122 "data_size": 63488 00:10:54.122 }, 00:10:54.122 { 00:10:54.122 "name": "BaseBdev4", 00:10:54.122 "uuid": "fae34cae-bab8-55a6-99f6-78f84e31c233", 00:10:54.122 "is_configured": true, 00:10:54.122 "data_offset": 2048, 00:10:54.122 "data_size": 63488 00:10:54.122 } 00:10:54.122 ] 00:10:54.122 }' 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.122 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:54.692 [2024-11-16 18:51:37.878462] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:54.692 [2024-11-16 18:51:37.878500] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.692 [2024-11-16 18:51:37.881197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.692 [2024-11-16 18:51:37.881278] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.692 [2024-11-16 18:51:37.881322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:54.692 [2024-11-16 18:51:37.881335] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72658 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72658 ']' 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72658 00:10:54.692 { 00:10:54.692 "results": [ 00:10:54.692 { 00:10:54.692 "job": "raid_bdev1", 00:10:54.692 "core_mask": "0x1", 00:10:54.692 "workload": "randrw", 00:10:54.692 "percentage": 50, 00:10:54.692 "status": "finished", 00:10:54.692 "queue_depth": 1, 00:10:54.692 "io_size": 131072, 00:10:54.692 "runtime": 1.417546, 00:10:54.692 "iops": 15595.261106165162, 00:10:54.692 "mibps": 1949.4076382706453, 00:10:54.692 "io_failed": 1, 00:10:54.692 "io_timeout": 0, 00:10:54.692 "avg_latency_us": 89.10155228441877, 00:10:54.692 "min_latency_us": 26.606113537117903, 00:10:54.692 "max_latency_us": 1430.9170305676855 00:10:54.692 } 00:10:54.692 ], 00:10:54.692 "core_count": 1 00:10:54.692 } 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72658 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.692 killing process with pid 72658 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72658' 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72658 00:10:54.692 18:51:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72658 00:10:54.692 [2024-11-16 18:51:37.921534] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:54.952 [2024-11-16 18:51:38.245646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Duk8I1t3HY 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:56.335 00:10:56.335 real 0m4.705s 00:10:56.335 user 0m5.567s 00:10:56.335 sys 0m0.593s 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:10:56.335 18:51:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.335 ************************************ 00:10:56.335 END TEST raid_read_error_test 00:10:56.335 ************************************ 00:10:56.335 18:51:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:10:56.335 18:51:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:56.335 18:51:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.335 18:51:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:56.335 ************************************ 00:10:56.335 START TEST raid_write_error_test 00:10:56.335 ************************************ 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.m4jVONEkx5 00:10:56.335 18:51:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72805 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72805 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72805 ']' 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:56.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:56.335 18:51:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.335 [2024-11-16 18:51:39.606965] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:56.335 [2024-11-16 18:51:39.607090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72805 ] 00:10:56.335 [2024-11-16 18:51:39.781421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.595 [2024-11-16 18:51:39.897715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:56.855 [2024-11-16 18:51:40.115477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.855 [2024-11-16 18:51:40.115519] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.115 BaseBdev1_malloc 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.115 true 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.115 [2024-11-16 18:51:40.500679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:57.115 [2024-11-16 18:51:40.500758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.115 [2024-11-16 18:51:40.500778] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:57.115 [2024-11-16 18:51:40.500789] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.115 [2024-11-16 18:51:40.502941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.115 [2024-11-16 18:51:40.502980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:57.115 BaseBdev1 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.115 BaseBdev2_malloc 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:57.115 18:51:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.115 true 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.115 [2024-11-16 18:51:40.562989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:57.115 [2024-11-16 18:51:40.563048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.115 [2024-11-16 18:51:40.563064] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:57.115 [2024-11-16 18:51:40.563074] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.115 [2024-11-16 18:51:40.565166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.115 [2024-11-16 18:51:40.565208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:57.115 BaseBdev2 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.115 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:10:57.376 BaseBdev3_malloc 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 true 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 [2024-11-16 18:51:40.641426] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:57.376 [2024-11-16 18:51:40.641487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.376 [2024-11-16 18:51:40.641508] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:57.376 [2024-11-16 18:51:40.641521] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.376 [2024-11-16 18:51:40.643862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.376 [2024-11-16 18:51:40.643904] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:57.376 BaseBdev3 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 BaseBdev4_malloc 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 true 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 [2024-11-16 18:51:40.708104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:10:57.376 [2024-11-16 18:51:40.708163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.376 [2024-11-16 18:51:40.708197] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:57.376 [2024-11-16 18:51:40.708208] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.376 [2024-11-16 18:51:40.710267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.376 [2024-11-16 18:51:40.710320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:10:57.376 BaseBdev4 
00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.376 [2024-11-16 18:51:40.720144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:57.376 [2024-11-16 18:51:40.721970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:57.376 [2024-11-16 18:51:40.722045] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:57.376 [2024-11-16 18:51:40.722108] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:57.376 [2024-11-16 18:51:40.722349] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:10:57.376 [2024-11-16 18:51:40.722371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:57.376 [2024-11-16 18:51:40.722610] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:10:57.376 [2024-11-16 18:51:40.722796] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:10:57.376 [2024-11-16 18:51:40.722815] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:10:57.376 [2024-11-16 18:51:40.722978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.376 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.377 "name": "raid_bdev1", 00:10:57.377 "uuid": "f35c90a1-5fb0-4cb2-b381-d9182b887793", 00:10:57.377 "strip_size_kb": 64, 00:10:57.377 "state": "online", 00:10:57.377 "raid_level": "concat", 00:10:57.377 "superblock": true, 00:10:57.377 "num_base_bdevs": 4, 00:10:57.377 "num_base_bdevs_discovered": 4, 00:10:57.377 
"num_base_bdevs_operational": 4, 00:10:57.377 "base_bdevs_list": [ 00:10:57.377 { 00:10:57.377 "name": "BaseBdev1", 00:10:57.377 "uuid": "fcd29e3c-7290-5fc8-a211-0a557a3984d1", 00:10:57.377 "is_configured": true, 00:10:57.377 "data_offset": 2048, 00:10:57.377 "data_size": 63488 00:10:57.377 }, 00:10:57.377 { 00:10:57.377 "name": "BaseBdev2", 00:10:57.377 "uuid": "41469d32-ec35-5ebc-9622-67771169ccbb", 00:10:57.377 "is_configured": true, 00:10:57.377 "data_offset": 2048, 00:10:57.377 "data_size": 63488 00:10:57.377 }, 00:10:57.377 { 00:10:57.377 "name": "BaseBdev3", 00:10:57.377 "uuid": "24f16513-7825-5577-8c4c-9bc244977bdf", 00:10:57.377 "is_configured": true, 00:10:57.377 "data_offset": 2048, 00:10:57.377 "data_size": 63488 00:10:57.377 }, 00:10:57.377 { 00:10:57.377 "name": "BaseBdev4", 00:10:57.377 "uuid": "23dfe7b9-3e63-539a-8e09-a883fd48e98c", 00:10:57.377 "is_configured": true, 00:10:57.377 "data_offset": 2048, 00:10:57.377 "data_size": 63488 00:10:57.377 } 00:10:57.377 ] 00:10:57.377 }' 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.377 18:51:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.947 18:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:57.947 18:51:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:57.947 [2024-11-16 18:51:41.252481] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.889 18:51:42 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.889 "name": "raid_bdev1", 00:10:58.889 "uuid": "f35c90a1-5fb0-4cb2-b381-d9182b887793", 00:10:58.889 "strip_size_kb": 64, 00:10:58.889 "state": "online", 00:10:58.889 "raid_level": "concat", 00:10:58.889 "superblock": true, 00:10:58.889 "num_base_bdevs": 4, 00:10:58.889 "num_base_bdevs_discovered": 4, 00:10:58.889 "num_base_bdevs_operational": 4, 00:10:58.889 "base_bdevs_list": [ 00:10:58.889 { 00:10:58.889 "name": "BaseBdev1", 00:10:58.889 "uuid": "fcd29e3c-7290-5fc8-a211-0a557a3984d1", 00:10:58.889 "is_configured": true, 00:10:58.889 "data_offset": 2048, 00:10:58.889 "data_size": 63488 00:10:58.889 }, 00:10:58.889 { 00:10:58.889 "name": "BaseBdev2", 00:10:58.889 "uuid": "41469d32-ec35-5ebc-9622-67771169ccbb", 00:10:58.889 "is_configured": true, 00:10:58.889 "data_offset": 2048, 00:10:58.889 "data_size": 63488 00:10:58.889 }, 00:10:58.889 { 00:10:58.889 "name": "BaseBdev3", 00:10:58.889 "uuid": "24f16513-7825-5577-8c4c-9bc244977bdf", 00:10:58.889 "is_configured": true, 00:10:58.889 "data_offset": 2048, 00:10:58.889 "data_size": 63488 00:10:58.889 }, 00:10:58.889 { 00:10:58.889 "name": "BaseBdev4", 00:10:58.889 "uuid": "23dfe7b9-3e63-539a-8e09-a883fd48e98c", 00:10:58.889 "is_configured": true, 00:10:58.889 "data_offset": 2048, 00:10:58.889 "data_size": 63488 00:10:58.889 } 00:10:58.889 ] 00:10:58.889 }' 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.889 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.149 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.149 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.149 18:51:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:59.409 [2024-11-16 18:51:42.624676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.409 [2024-11-16 18:51:42.624717] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.409 [2024-11-16 18:51:42.627574] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.409 [2024-11-16 18:51:42.627638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.409 [2024-11-16 18:51:42.627696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.409 [2024-11-16 18:51:42.627712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:10:59.409 { 00:10:59.409 "results": [ 00:10:59.409 { 00:10:59.409 "job": "raid_bdev1", 00:10:59.409 "core_mask": "0x1", 00:10:59.409 "workload": "randrw", 00:10:59.409 "percentage": 50, 00:10:59.409 "status": "finished", 00:10:59.409 "queue_depth": 1, 00:10:59.409 "io_size": 131072, 00:10:59.409 "runtime": 1.372976, 00:10:59.409 "iops": 15612.071878896644, 00:10:59.409 "mibps": 1951.5089848620805, 00:10:59.409 "io_failed": 1, 00:10:59.409 "io_timeout": 0, 00:10:59.409 "avg_latency_us": 89.02795036876299, 00:10:59.409 "min_latency_us": 26.270742358078603, 00:10:59.409 "max_latency_us": 1631.2454148471616 00:10:59.409 } 00:10:59.409 ], 00:10:59.409 "core_count": 1 00:10:59.409 } 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72805 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72805 ']' 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72805 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72805 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.409 killing process with pid 72805 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72805' 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72805 00:10:59.409 [2024-11-16 18:51:42.661671] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.409 18:51:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72805 00:10:59.669 [2024-11-16 18:51:42.993329] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.m4jVONEkx5 00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:01.051 00:11:01.051 real 0m4.661s 00:11:01.051 user 0m5.486s 
00:11:01.051 sys 0m0.584s
00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:01.051 18:51:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.051 ************************************
00:11:01.051 END TEST raid_write_error_test
00:11:01.051 ************************************
00:11:01.051 18:51:44 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:11:01.051 18:51:44 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false
00:11:01.051 18:51:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:01.051 18:51:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:01.051 18:51:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:11:01.051 ************************************
00:11:01.051 START TEST raid_state_function_test
00:11:01.051 ************************************
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4')
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72951
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:11:01.051 Process raid pid: 72951
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72951'
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72951
00:11:01.051 18:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72951 ']'
00:11:01.052 18:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:01.052 18:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:01.052 18:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:01.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:01.052 18:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:01.052 18:51:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.052 [2024-11-16 18:51:44.330202] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization...
00:11:01.052 [2024-11-16 18:51:44.330701] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:01.052 [2024-11-16 18:51:44.507870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:01.311 [2024-11-16 18:51:44.619834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:01.571 [2024-11-16 18:51:44.813079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:01.571 [2024-11-16 18:51:44.813132] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.830 [2024-11-16 18:51:45.166142] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:01.830 [2024-11-16 18:51:45.166218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:01.830 [2024-11-16 18:51:45.166228] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:01.830 [2024-11-16 18:51:45.166238] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:01.830 [2024-11-16 18:51:45.166243] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:01.830 [2024-11-16 18:51:45.166252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:01.830 [2024-11-16 18:51:45.166258] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:01.830 [2024-11-16 18:51:45.166266] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:01.830 "name": "Existed_Raid",
00:11:01.830 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:01.830 "strip_size_kb": 0,
00:11:01.830 "state": "configuring",
00:11:01.830 "raid_level": "raid1",
00:11:01.830 "superblock": false,
00:11:01.830 "num_base_bdevs": 4,
00:11:01.830 "num_base_bdevs_discovered": 0,
00:11:01.830 "num_base_bdevs_operational": 4,
00:11:01.830 "base_bdevs_list": [
00:11:01.830 {
00:11:01.830 "name": "BaseBdev1",
00:11:01.830 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:01.830 "is_configured": false,
00:11:01.830 "data_offset": 0,
00:11:01.830 "data_size": 0
00:11:01.830 },
00:11:01.830 {
00:11:01.830 "name": "BaseBdev2",
00:11:01.830 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:01.830 "is_configured": false,
00:11:01.830 "data_offset": 0,
00:11:01.830 "data_size": 0
00:11:01.830 },
00:11:01.830 {
00:11:01.830 "name": "BaseBdev3",
00:11:01.830 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:01.830 "is_configured": false,
00:11:01.830 "data_offset": 0,
00:11:01.830 "data_size": 0
00:11:01.830 },
00:11:01.830 {
00:11:01.830 "name": "BaseBdev4",
00:11:01.830 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:01.830 "is_configured": false,
00:11:01.830 "data_offset": 0,
00:11:01.830 "data_size": 0
00:11:01.830 }
00:11:01.830 ]
00:11:01.830 }'
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:01.830 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.400 [2024-11-16 18:51:45.629325] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:02.400 [2024-11-16 18:51:45.629368] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.400 [2024-11-16 18:51:45.641288] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:11:02.400 [2024-11-16 18:51:45.641334] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:11:02.400 [2024-11-16 18:51:45.641343] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:02.400 [2024-11-16 18:51:45.641352] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:02.400 [2024-11-16 18:51:45.641357] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:02.400 [2024-11-16 18:51:45.641366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:02.400 [2024-11-16 18:51:45.641372] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:02.400 [2024-11-16 18:51:45.641380] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.400 [2024-11-16 18:51:45.688595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:02.400 BaseBdev1
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.400 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.400 [
00:11:02.400 {
00:11:02.400 "name": "BaseBdev1",
00:11:02.400 "aliases": [
00:11:02.400 "981dea9f-b21f-425e-b5d9-05295d26186e"
00:11:02.400 ],
00:11:02.400 "product_name": "Malloc disk",
00:11:02.400 "block_size": 512,
00:11:02.400 "num_blocks": 65536,
00:11:02.400 "uuid": "981dea9f-b21f-425e-b5d9-05295d26186e",
00:11:02.400 "assigned_rate_limits": {
00:11:02.400 "rw_ios_per_sec": 0,
00:11:02.400 "rw_mbytes_per_sec": 0,
00:11:02.400 "r_mbytes_per_sec": 0,
00:11:02.400 "w_mbytes_per_sec": 0
00:11:02.400 },
00:11:02.400 "claimed": true,
00:11:02.400 "claim_type": "exclusive_write",
00:11:02.400 "zoned": false,
00:11:02.400 "supported_io_types": {
00:11:02.400 "read": true,
00:11:02.400 "write": true,
00:11:02.400 "unmap": true,
00:11:02.400 "flush": true,
00:11:02.400 "reset": true,
00:11:02.400 "nvme_admin": false,
00:11:02.400 "nvme_io": false,
00:11:02.400 "nvme_io_md": false,
00:11:02.400 "write_zeroes": true,
00:11:02.400 "zcopy": true,
00:11:02.400 "get_zone_info": false,
00:11:02.400 "zone_management": false,
00:11:02.400 "zone_append": false,
00:11:02.400 "compare": false,
00:11:02.400 "compare_and_write": false,
00:11:02.400 "abort": true,
00:11:02.400 "seek_hole": false,
00:11:02.400 "seek_data": false,
00:11:02.400 "copy": true,
00:11:02.401 "nvme_iov_md": false
00:11:02.401 },
00:11:02.401 "memory_domains": [
00:11:02.401 {
00:11:02.401 "dma_device_id": "system",
00:11:02.401 "dma_device_type": 1
00:11:02.401 },
00:11:02.401 {
00:11:02.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:02.401 "dma_device_type": 2
00:11:02.401 }
00:11:02.401 ],
00:11:02.401 "driver_specific": {}
00:11:02.401 }
00:11:02.401 ]
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:02.401 "name": "Existed_Raid",
00:11:02.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.401 "strip_size_kb": 0,
00:11:02.401 "state": "configuring",
00:11:02.401 "raid_level": "raid1",
00:11:02.401 "superblock": false,
00:11:02.401 "num_base_bdevs": 4,
00:11:02.401 "num_base_bdevs_discovered": 1,
00:11:02.401 "num_base_bdevs_operational": 4,
00:11:02.401 "base_bdevs_list": [
00:11:02.401 {
00:11:02.401 "name": "BaseBdev1",
00:11:02.401 "uuid": "981dea9f-b21f-425e-b5d9-05295d26186e",
00:11:02.401 "is_configured": true,
00:11:02.401 "data_offset": 0,
00:11:02.401 "data_size": 65536
00:11:02.401 },
00:11:02.401 {
00:11:02.401 "name": "BaseBdev2",
00:11:02.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.401 "is_configured": false,
00:11:02.401 "data_offset": 0,
00:11:02.401 "data_size": 0
00:11:02.401 },
00:11:02.401 {
00:11:02.401 "name": "BaseBdev3",
00:11:02.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.401 "is_configured": false,
00:11:02.401 "data_offset": 0,
00:11:02.401 "data_size": 0
00:11:02.401 },
00:11:02.401 {
00:11:02.401 "name": "BaseBdev4",
00:11:02.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.401 "is_configured": false,
00:11:02.401 "data_offset": 0,
00:11:02.401 "data_size": 0
00:11:02.401 }
00:11:02.401 ]
00:11:02.401 }'
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:02.401 18:51:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.971 [2024-11-16 18:51:46.167850] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:11:02.971 [2024-11-16 18:51:46.167913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.971 [2024-11-16 18:51:46.179865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:11:02.971 [2024-11-16 18:51:46.181648] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:11:02.971 [2024-11-16 18:51:46.181703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:11:02.971 [2024-11-16 18:51:46.181712] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:11:02.971 [2024-11-16 18:51:46.181722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:11:02.971 [2024-11-16 18:51:46.181729] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:11:02.971 [2024-11-16 18:51:46.181737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:02.971 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:02.971 "name": "Existed_Raid",
00:11:02.971 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.971 "strip_size_kb": 0,
00:11:02.971 "state": "configuring",
00:11:02.971 "raid_level": "raid1",
00:11:02.971 "superblock": false,
00:11:02.971 "num_base_bdevs": 4,
00:11:02.971 "num_base_bdevs_discovered": 1,
00:11:02.971 "num_base_bdevs_operational": 4,
00:11:02.971 "base_bdevs_list": [
00:11:02.971 {
00:11:02.971 "name": "BaseBdev1",
00:11:02.971 "uuid": "981dea9f-b21f-425e-b5d9-05295d26186e",
00:11:02.971 "is_configured": true,
00:11:02.971 "data_offset": 0,
00:11:02.971 "data_size": 65536
00:11:02.971 },
00:11:02.971 {
00:11:02.971 "name": "BaseBdev2",
00:11:02.971 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.971 "is_configured": false,
00:11:02.971 "data_offset": 0,
00:11:02.971 "data_size": 0
00:11:02.971 },
00:11:02.971 {
00:11:02.971 "name": "BaseBdev3",
00:11:02.971 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.971 "is_configured": false,
00:11:02.971 "data_offset": 0,
00:11:02.971 "data_size": 0
00:11:02.971 },
00:11:02.971 {
00:11:02.971 "name": "BaseBdev4",
00:11:02.972 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:02.972 "is_configured": false,
00:11:02.972 "data_offset": 0,
00:11:02.972 "data_size": 0
00:11:02.972 }
00:11:02.972 ]
00:11:02.972 }'
00:11:02.972 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:02.972 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.231 BaseBdev2
00:11:03.231 [2024-11-16 18:51:46.644040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.231 [
00:11:03.231 {
00:11:03.231 "name": "BaseBdev2",
00:11:03.231 "aliases": [
00:11:03.231 "29d361c7-a7cf-4327-9102-3c66264dd88e"
00:11:03.231 ],
00:11:03.231 "product_name": "Malloc disk",
00:11:03.231 "block_size": 512,
00:11:03.231 "num_blocks": 65536,
00:11:03.231 "uuid": "29d361c7-a7cf-4327-9102-3c66264dd88e",
00:11:03.231 "assigned_rate_limits": {
00:11:03.231 "rw_ios_per_sec": 0,
00:11:03.231 "rw_mbytes_per_sec": 0,
00:11:03.231 "r_mbytes_per_sec": 0,
00:11:03.231 "w_mbytes_per_sec": 0
00:11:03.231 },
00:11:03.231 "claimed": true,
00:11:03.231 "claim_type": "exclusive_write",
00:11:03.231 "zoned": false,
00:11:03.231 "supported_io_types": {
00:11:03.231 "read": true,
00:11:03.231 "write": true,
00:11:03.231 "unmap": true,
00:11:03.231 "flush": true,
00:11:03.231 "reset": true,
00:11:03.231 "nvme_admin": false,
00:11:03.231 "nvme_io": false,
00:11:03.231 "nvme_io_md": false,
00:11:03.231 "write_zeroes": true,
00:11:03.231 "zcopy": true,
00:11:03.231 "get_zone_info": false,
00:11:03.231 "zone_management": false,
00:11:03.231 "zone_append": false,
00:11:03.231 "compare": false,
00:11:03.231 "compare_and_write": false,
00:11:03.231 "abort": true,
00:11:03.231 "seek_hole": false,
00:11:03.231 "seek_data": false,
00:11:03.231 "copy": true,
00:11:03.231 "nvme_iov_md": false
00:11:03.231 },
00:11:03.231 "memory_domains": [
00:11:03.231 {
00:11:03.231 "dma_device_id": "system",
00:11:03.231 "dma_device_type": 1
00:11:03.231 },
00:11:03.231 {
00:11:03.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:03.231 "dma_device_type": 2
00:11:03.231 }
00:11:03.231 ],
00:11:03.231 "driver_specific": {}
00:11:03.231 }
00:11:03.231 ]
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.231 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.491 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.491 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:03.491 "name": "Existed_Raid",
00:11:03.491 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:03.491 "strip_size_kb": 0,
00:11:03.491 "state": "configuring",
00:11:03.491 "raid_level": "raid1",
00:11:03.491 "superblock": false,
00:11:03.491 "num_base_bdevs": 4,
00:11:03.491 "num_base_bdevs_discovered": 2,
00:11:03.491 "num_base_bdevs_operational": 4,
00:11:03.491 "base_bdevs_list": [
00:11:03.491 {
00:11:03.491 "name": "BaseBdev1",
00:11:03.491 "uuid": "981dea9f-b21f-425e-b5d9-05295d26186e",
00:11:03.491 "is_configured": true,
00:11:03.491 "data_offset": 0,
00:11:03.491 "data_size": 65536
00:11:03.491 },
00:11:03.491 {
00:11:03.491 "name": "BaseBdev2",
00:11:03.491 "uuid": "29d361c7-a7cf-4327-9102-3c66264dd88e",
00:11:03.491 "is_configured": true,
00:11:03.491 "data_offset": 0,
00:11:03.491 "data_size": 65536
00:11:03.491 },
00:11:03.491 {
00:11:03.491 "name": "BaseBdev3",
00:11:03.491 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:03.491 "is_configured": false,
00:11:03.491 "data_offset": 0,
00:11:03.491 "data_size": 0
00:11:03.491 },
00:11:03.491 {
00:11:03.491 "name": "BaseBdev4",
00:11:03.491 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:03.491 "is_configured": false,
00:11:03.491 "data_offset": 0,
00:11:03.491 "data_size": 0
00:11:03.491 }
00:11:03.491 ]
00:11:03.491 }'
00:11:03.491 18:51:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:11:03.491 18:51:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.751 [2024-11-16 18:51:47.175630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:11:03.751 BaseBdev3
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:03.751 [
00:11:03.751 {
00:11:03.751 "name": "BaseBdev3",
00:11:03.751 "aliases": [
00:11:03.751 "7969cbdf-dbc7-494b-9b6d-dbf4f6318661"
00:11:03.751 ],
00:11:03.751 "product_name": "Malloc disk",
00:11:03.751 "block_size": 512,
00:11:03.751 "num_blocks": 65536,
00:11:03.751 "uuid": "7969cbdf-dbc7-494b-9b6d-dbf4f6318661",
00:11:03.751 "assigned_rate_limits": {
00:11:03.751 "rw_ios_per_sec": 0,
00:11:03.751 "rw_mbytes_per_sec": 0,
00:11:03.751 "r_mbytes_per_sec": 0,
00:11:03.751 "w_mbytes_per_sec": 0
00:11:03.751 },
00:11:03.751 "claimed": true,
00:11:03.751 "claim_type": "exclusive_write",
00:11:03.751 "zoned": false,
00:11:03.751 "supported_io_types": {
00:11:03.751 "read": true,
00:11:03.751 "write": true,
00:11:03.751 "unmap": true,
00:11:03.751 "flush": true,
00:11:03.751 "reset": true,
00:11:03.751 "nvme_admin": false,
00:11:03.751 "nvme_io": false,
00:11:03.751 "nvme_io_md": false,
00:11:03.751 "write_zeroes": true,
00:11:03.751 "zcopy": true,
00:11:03.751 "get_zone_info": false,
00:11:03.751 "zone_management": false,
00:11:03.751 "zone_append": false,
00:11:03.751 "compare": false,
00:11:03.751 "compare_and_write": false,
00:11:03.751 "abort": true,
00:11:03.751 "seek_hole": false,
00:11:03.751 "seek_data": false,
00:11:03.751 "copy": true,
00:11:03.751 "nvme_iov_md": false
00:11:03.751 },
00:11:03.751 "memory_domains": [
00:11:03.751 {
00:11:03.751 "dma_device_id": "system",
00:11:03.751 "dma_device_type": 1
00:11:03.751 },
00:11:03.751 {
00:11:03.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:03.751 "dma_device_type": 2
00:11:03.751 }
00:11:03.751 ],
00:11:03.751 "driver_specific": {}
00:11:03.751 }
00:11:03.751 ]
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:03.751 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:11:04.010 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:04.010 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:11:04.010 "name": "Existed_Raid",
00:11:04.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:04.010 "strip_size_kb": 0,
00:11:04.010 "state": "configuring",
00:11:04.010 "raid_level": "raid1",
00:11:04.010 "superblock": false,
00:11:04.010 "num_base_bdevs": 4,
00:11:04.010 "num_base_bdevs_discovered": 3,
00:11:04.010 "num_base_bdevs_operational": 4,
00:11:04.010 "base_bdevs_list": [
00:11:04.010 {
00:11:04.010 "name": "BaseBdev1",
00:11:04.010 "uuid": "981dea9f-b21f-425e-b5d9-05295d26186e",
00:11:04.010 "is_configured": true,
00:11:04.010 "data_offset": 0,
00:11:04.010 "data_size": 65536
00:11:04.010 },
00:11:04.010 {
00:11:04.010 "name": "BaseBdev2",
00:11:04.010 "uuid": "29d361c7-a7cf-4327-9102-3c66264dd88e",
00:11:04.010 "is_configured": true,
00:11:04.010 "data_offset": 0,
00:11:04.010 "data_size": 65536
00:11:04.010 },
00:11:04.010 {
00:11:04.010 "name": "BaseBdev3",
00:11:04.010 "uuid": "7969cbdf-dbc7-494b-9b6d-dbf4f6318661",
00:11:04.010 "is_configured": true,
00:11:04.010 "data_offset": 0,
00:11:04.010 "data_size": 65536
00:11:04.010 },
00:11:04.010 {
00:11:04.010 "name": "BaseBdev4",
00:11:04.010 "uuid": "00000000-0000-0000-0000-000000000000",
00:11:04.010 "is_configured": false,
00:11:04.010 "data_offset": 
0, 00:11:04.010 "data_size": 0 00:11:04.010 } 00:11:04.010 ] 00:11:04.010 }' 00:11:04.010 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.010 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.270 [2024-11-16 18:51:47.624031] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:04.270 [2024-11-16 18:51:47.624096] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:04.270 [2024-11-16 18:51:47.624105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:04.270 [2024-11-16 18:51:47.624384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:04.270 [2024-11-16 18:51:47.624562] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:04.270 [2024-11-16 18:51:47.624584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:04.270 [2024-11-16 18:51:47.624861] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.270 BaseBdev4 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.270 [ 00:11:04.270 { 00:11:04.270 "name": "BaseBdev4", 00:11:04.270 "aliases": [ 00:11:04.270 "77023883-8e9b-455f-bbe3-e787eba5da5b" 00:11:04.270 ], 00:11:04.270 "product_name": "Malloc disk", 00:11:04.270 "block_size": 512, 00:11:04.270 "num_blocks": 65536, 00:11:04.270 "uuid": "77023883-8e9b-455f-bbe3-e787eba5da5b", 00:11:04.270 "assigned_rate_limits": { 00:11:04.270 "rw_ios_per_sec": 0, 00:11:04.270 "rw_mbytes_per_sec": 0, 00:11:04.270 "r_mbytes_per_sec": 0, 00:11:04.270 "w_mbytes_per_sec": 0 00:11:04.270 }, 00:11:04.270 "claimed": true, 00:11:04.270 "claim_type": "exclusive_write", 00:11:04.270 "zoned": false, 00:11:04.270 "supported_io_types": { 00:11:04.270 "read": true, 00:11:04.270 "write": true, 00:11:04.270 "unmap": true, 00:11:04.270 "flush": true, 00:11:04.270 "reset": true, 00:11:04.270 "nvme_admin": false, 00:11:04.270 "nvme_io": 
false, 00:11:04.270 "nvme_io_md": false, 00:11:04.270 "write_zeroes": true, 00:11:04.270 "zcopy": true, 00:11:04.270 "get_zone_info": false, 00:11:04.270 "zone_management": false, 00:11:04.270 "zone_append": false, 00:11:04.270 "compare": false, 00:11:04.270 "compare_and_write": false, 00:11:04.270 "abort": true, 00:11:04.270 "seek_hole": false, 00:11:04.270 "seek_data": false, 00:11:04.270 "copy": true, 00:11:04.270 "nvme_iov_md": false 00:11:04.270 }, 00:11:04.270 "memory_domains": [ 00:11:04.270 { 00:11:04.270 "dma_device_id": "system", 00:11:04.270 "dma_device_type": 1 00:11:04.270 }, 00:11:04.270 { 00:11:04.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.270 "dma_device_type": 2 00:11:04.270 } 00:11:04.270 ], 00:11:04.270 "driver_specific": {} 00:11:04.270 } 00:11:04.270 ] 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.270 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.270 "name": "Existed_Raid", 00:11:04.270 "uuid": "460501a7-a641-42d0-b4d8-b259f26ba630", 00:11:04.270 "strip_size_kb": 0, 00:11:04.270 "state": "online", 00:11:04.270 "raid_level": "raid1", 00:11:04.270 "superblock": false, 00:11:04.270 "num_base_bdevs": 4, 00:11:04.270 "num_base_bdevs_discovered": 4, 00:11:04.270 "num_base_bdevs_operational": 4, 00:11:04.270 "base_bdevs_list": [ 00:11:04.270 { 00:11:04.270 "name": "BaseBdev1", 00:11:04.270 "uuid": "981dea9f-b21f-425e-b5d9-05295d26186e", 00:11:04.270 "is_configured": true, 00:11:04.270 "data_offset": 0, 00:11:04.270 "data_size": 65536 00:11:04.270 }, 00:11:04.270 { 00:11:04.270 "name": "BaseBdev2", 00:11:04.270 "uuid": "29d361c7-a7cf-4327-9102-3c66264dd88e", 00:11:04.270 "is_configured": true, 00:11:04.270 "data_offset": 0, 00:11:04.270 "data_size": 65536 00:11:04.270 }, 00:11:04.270 { 00:11:04.270 "name": "BaseBdev3", 00:11:04.270 "uuid": "7969cbdf-dbc7-494b-9b6d-dbf4f6318661", 
00:11:04.270 "is_configured": true, 00:11:04.270 "data_offset": 0, 00:11:04.270 "data_size": 65536 00:11:04.270 }, 00:11:04.271 { 00:11:04.271 "name": "BaseBdev4", 00:11:04.271 "uuid": "77023883-8e9b-455f-bbe3-e787eba5da5b", 00:11:04.271 "is_configured": true, 00:11:04.271 "data_offset": 0, 00:11:04.271 "data_size": 65536 00:11:04.271 } 00:11:04.271 ] 00:11:04.271 }' 00:11:04.271 18:51:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.271 18:51:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.840 [2024-11-16 18:51:48.091615] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.840 18:51:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.840 "name": "Existed_Raid", 00:11:04.840 "aliases": [ 00:11:04.840 "460501a7-a641-42d0-b4d8-b259f26ba630" 00:11:04.840 ], 00:11:04.840 "product_name": "Raid Volume", 00:11:04.840 "block_size": 512, 00:11:04.840 "num_blocks": 65536, 00:11:04.841 "uuid": "460501a7-a641-42d0-b4d8-b259f26ba630", 00:11:04.841 "assigned_rate_limits": { 00:11:04.841 "rw_ios_per_sec": 0, 00:11:04.841 "rw_mbytes_per_sec": 0, 00:11:04.841 "r_mbytes_per_sec": 0, 00:11:04.841 "w_mbytes_per_sec": 0 00:11:04.841 }, 00:11:04.841 "claimed": false, 00:11:04.841 "zoned": false, 00:11:04.841 "supported_io_types": { 00:11:04.841 "read": true, 00:11:04.841 "write": true, 00:11:04.841 "unmap": false, 00:11:04.841 "flush": false, 00:11:04.841 "reset": true, 00:11:04.841 "nvme_admin": false, 00:11:04.841 "nvme_io": false, 00:11:04.841 "nvme_io_md": false, 00:11:04.841 "write_zeroes": true, 00:11:04.841 "zcopy": false, 00:11:04.841 "get_zone_info": false, 00:11:04.841 "zone_management": false, 00:11:04.841 "zone_append": false, 00:11:04.841 "compare": false, 00:11:04.841 "compare_and_write": false, 00:11:04.841 "abort": false, 00:11:04.841 "seek_hole": false, 00:11:04.841 "seek_data": false, 00:11:04.841 "copy": false, 00:11:04.841 "nvme_iov_md": false 00:11:04.841 }, 00:11:04.841 "memory_domains": [ 00:11:04.841 { 00:11:04.841 "dma_device_id": "system", 00:11:04.841 "dma_device_type": 1 00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.841 "dma_device_type": 2 00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "dma_device_id": "system", 00:11:04.841 "dma_device_type": 1 00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.841 "dma_device_type": 2 00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "dma_device_id": "system", 00:11:04.841 "dma_device_type": 1 00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.841 "dma_device_type": 2 
00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "dma_device_id": "system", 00:11:04.841 "dma_device_type": 1 00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.841 "dma_device_type": 2 00:11:04.841 } 00:11:04.841 ], 00:11:04.841 "driver_specific": { 00:11:04.841 "raid": { 00:11:04.841 "uuid": "460501a7-a641-42d0-b4d8-b259f26ba630", 00:11:04.841 "strip_size_kb": 0, 00:11:04.841 "state": "online", 00:11:04.841 "raid_level": "raid1", 00:11:04.841 "superblock": false, 00:11:04.841 "num_base_bdevs": 4, 00:11:04.841 "num_base_bdevs_discovered": 4, 00:11:04.841 "num_base_bdevs_operational": 4, 00:11:04.841 "base_bdevs_list": [ 00:11:04.841 { 00:11:04.841 "name": "BaseBdev1", 00:11:04.841 "uuid": "981dea9f-b21f-425e-b5d9-05295d26186e", 00:11:04.841 "is_configured": true, 00:11:04.841 "data_offset": 0, 00:11:04.841 "data_size": 65536 00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "name": "BaseBdev2", 00:11:04.841 "uuid": "29d361c7-a7cf-4327-9102-3c66264dd88e", 00:11:04.841 "is_configured": true, 00:11:04.841 "data_offset": 0, 00:11:04.841 "data_size": 65536 00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "name": "BaseBdev3", 00:11:04.841 "uuid": "7969cbdf-dbc7-494b-9b6d-dbf4f6318661", 00:11:04.841 "is_configured": true, 00:11:04.841 "data_offset": 0, 00:11:04.841 "data_size": 65536 00:11:04.841 }, 00:11:04.841 { 00:11:04.841 "name": "BaseBdev4", 00:11:04.841 "uuid": "77023883-8e9b-455f-bbe3-e787eba5da5b", 00:11:04.841 "is_configured": true, 00:11:04.841 "data_offset": 0, 00:11:04.841 "data_size": 65536 00:11:04.841 } 00:11:04.841 ] 00:11:04.841 } 00:11:04.841 } 00:11:04.841 }' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:04.841 BaseBdev2 00:11:04.841 BaseBdev3 00:11:04.841 BaseBdev4' 00:11:04.841 
18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.841 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.108 [2024-11-16 18:51:48.334920] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.108 "name": "Existed_Raid", 00:11:05.108 "uuid": "460501a7-a641-42d0-b4d8-b259f26ba630", 00:11:05.108 "strip_size_kb": 0, 00:11:05.108 "state": "online", 00:11:05.108 "raid_level": "raid1", 00:11:05.108 "superblock": false, 00:11:05.108 "num_base_bdevs": 4, 00:11:05.108 "num_base_bdevs_discovered": 3, 00:11:05.108 "num_base_bdevs_operational": 3, 00:11:05.108 "base_bdevs_list": [ 00:11:05.108 { 00:11:05.108 "name": null, 00:11:05.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.108 "is_configured": false, 00:11:05.108 "data_offset": 0, 00:11:05.108 "data_size": 65536 00:11:05.108 }, 00:11:05.108 { 00:11:05.108 "name": "BaseBdev2", 00:11:05.108 "uuid": "29d361c7-a7cf-4327-9102-3c66264dd88e", 00:11:05.108 "is_configured": true, 00:11:05.108 "data_offset": 0, 00:11:05.108 "data_size": 65536 00:11:05.108 }, 00:11:05.108 { 00:11:05.108 "name": "BaseBdev3", 00:11:05.108 "uuid": "7969cbdf-dbc7-494b-9b6d-dbf4f6318661", 00:11:05.108 "is_configured": true, 00:11:05.108 "data_offset": 0, 00:11:05.108 "data_size": 65536 00:11:05.108 }, 00:11:05.108 { 
00:11:05.108 "name": "BaseBdev4", 00:11:05.108 "uuid": "77023883-8e9b-455f-bbe3-e787eba5da5b", 00:11:05.108 "is_configured": true, 00:11:05.108 "data_offset": 0, 00:11:05.108 "data_size": 65536 00:11:05.108 } 00:11:05.108 ] 00:11:05.108 }' 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.108 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.701 18:51:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.701 [2024-11-16 18:51:48.918808] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.701 
18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.701 [2024-11-16 18:51:49.071481] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.701 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.701 18:51:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.961 [2024-11-16 18:51:49.213071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:05.961 [2024-11-16 18:51:49.213176] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:05.961 [2024-11-16 18:51:49.306682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:05.961 [2024-11-16 18:51:49.306741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:05.961 [2024-11-16 18:51:49.306753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:05.961 18:51:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.961 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.962 BaseBdev2 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:05.962 18:51:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.962 [ 00:11:05.962 { 00:11:05.962 "name": "BaseBdev2", 00:11:05.962 "aliases": [ 00:11:05.962 "173759ca-7a7a-4e65-9bc3-321badff1d99" 00:11:05.962 ], 00:11:05.962 "product_name": "Malloc disk", 00:11:05.962 "block_size": 512, 00:11:05.962 "num_blocks": 65536, 00:11:05.962 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:05.962 "assigned_rate_limits": { 00:11:05.962 "rw_ios_per_sec": 0, 00:11:05.962 "rw_mbytes_per_sec": 0, 00:11:05.962 "r_mbytes_per_sec": 0, 00:11:05.962 "w_mbytes_per_sec": 0 00:11:05.962 }, 00:11:05.962 "claimed": false, 00:11:05.962 "zoned": false, 00:11:05.962 "supported_io_types": { 00:11:05.962 "read": true, 00:11:05.962 "write": true, 00:11:05.962 "unmap": true, 00:11:05.962 "flush": true, 00:11:05.962 "reset": true, 00:11:05.962 "nvme_admin": false, 00:11:05.962 "nvme_io": false, 00:11:05.962 "nvme_io_md": false, 00:11:05.962 "write_zeroes": true, 00:11:05.962 "zcopy": true, 00:11:05.962 "get_zone_info": false, 00:11:05.962 "zone_management": false, 00:11:05.962 "zone_append": false, 00:11:05.962 "compare": false, 00:11:05.962 "compare_and_write": false, 
00:11:05.962 "abort": true, 00:11:05.962 "seek_hole": false, 00:11:05.962 "seek_data": false, 00:11:05.962 "copy": true, 00:11:05.962 "nvme_iov_md": false 00:11:05.962 }, 00:11:05.962 "memory_domains": [ 00:11:05.962 { 00:11:05.962 "dma_device_id": "system", 00:11:05.962 "dma_device_type": 1 00:11:05.962 }, 00:11:05.962 { 00:11:05.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.962 "dma_device_type": 2 00:11:05.962 } 00:11:05.962 ], 00:11:05.962 "driver_specific": {} 00:11:05.962 } 00:11:05.962 ] 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.962 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.222 BaseBdev3 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.222 18:51:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.222 [ 00:11:06.222 { 00:11:06.222 "name": "BaseBdev3", 00:11:06.222 "aliases": [ 00:11:06.222 "9025d166-256b-43c9-9936-757c5644d564" 00:11:06.222 ], 00:11:06.222 "product_name": "Malloc disk", 00:11:06.222 "block_size": 512, 00:11:06.222 "num_blocks": 65536, 00:11:06.222 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:06.222 "assigned_rate_limits": { 00:11:06.222 "rw_ios_per_sec": 0, 00:11:06.222 "rw_mbytes_per_sec": 0, 00:11:06.222 "r_mbytes_per_sec": 0, 00:11:06.222 "w_mbytes_per_sec": 0 00:11:06.222 }, 00:11:06.222 "claimed": false, 00:11:06.222 "zoned": false, 00:11:06.222 "supported_io_types": { 00:11:06.222 "read": true, 00:11:06.222 "write": true, 00:11:06.222 "unmap": true, 00:11:06.222 "flush": true, 00:11:06.222 "reset": true, 00:11:06.222 "nvme_admin": false, 00:11:06.222 "nvme_io": false, 00:11:06.222 "nvme_io_md": false, 00:11:06.222 "write_zeroes": true, 00:11:06.222 "zcopy": true, 00:11:06.222 "get_zone_info": false, 00:11:06.222 "zone_management": false, 00:11:06.222 "zone_append": false, 00:11:06.222 "compare": false, 00:11:06.222 "compare_and_write": false, 
00:11:06.222 "abort": true, 00:11:06.222 "seek_hole": false, 00:11:06.222 "seek_data": false, 00:11:06.222 "copy": true, 00:11:06.222 "nvme_iov_md": false 00:11:06.222 }, 00:11:06.222 "memory_domains": [ 00:11:06.222 { 00:11:06.222 "dma_device_id": "system", 00:11:06.222 "dma_device_type": 1 00:11:06.222 }, 00:11:06.222 { 00:11:06.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.222 "dma_device_type": 2 00:11:06.222 } 00:11:06.222 ], 00:11:06.222 "driver_specific": {} 00:11:06.222 } 00:11:06.222 ] 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.222 BaseBdev4 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:06.222 18:51:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.222 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.222 [ 00:11:06.222 { 00:11:06.222 "name": "BaseBdev4", 00:11:06.222 "aliases": [ 00:11:06.222 "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec" 00:11:06.222 ], 00:11:06.222 "product_name": "Malloc disk", 00:11:06.222 "block_size": 512, 00:11:06.222 "num_blocks": 65536, 00:11:06.222 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:06.222 "assigned_rate_limits": { 00:11:06.223 "rw_ios_per_sec": 0, 00:11:06.223 "rw_mbytes_per_sec": 0, 00:11:06.223 "r_mbytes_per_sec": 0, 00:11:06.223 "w_mbytes_per_sec": 0 00:11:06.223 }, 00:11:06.223 "claimed": false, 00:11:06.223 "zoned": false, 00:11:06.223 "supported_io_types": { 00:11:06.223 "read": true, 00:11:06.223 "write": true, 00:11:06.223 "unmap": true, 00:11:06.223 "flush": true, 00:11:06.223 "reset": true, 00:11:06.223 "nvme_admin": false, 00:11:06.223 "nvme_io": false, 00:11:06.223 "nvme_io_md": false, 00:11:06.223 "write_zeroes": true, 00:11:06.223 "zcopy": true, 00:11:06.223 "get_zone_info": false, 00:11:06.223 "zone_management": false, 00:11:06.223 "zone_append": false, 00:11:06.223 "compare": false, 00:11:06.223 "compare_and_write": false, 
00:11:06.223 "abort": true, 00:11:06.223 "seek_hole": false, 00:11:06.223 "seek_data": false, 00:11:06.223 "copy": true, 00:11:06.223 "nvme_iov_md": false 00:11:06.223 }, 00:11:06.223 "memory_domains": [ 00:11:06.223 { 00:11:06.223 "dma_device_id": "system", 00:11:06.223 "dma_device_type": 1 00:11:06.223 }, 00:11:06.223 { 00:11:06.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.223 "dma_device_type": 2 00:11:06.223 } 00:11:06.223 ], 00:11:06.223 "driver_specific": {} 00:11:06.223 } 00:11:06.223 ] 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.223 [2024-11-16 18:51:49.583089] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:06.223 [2024-11-16 18:51:49.583140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:06.223 [2024-11-16 18:51:49.583161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.223 [2024-11-16 18:51:49.585046] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.223 [2024-11-16 18:51:49.585097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.223 18:51:49 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.223 "name": "Existed_Raid", 00:11:06.223 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:06.223 "strip_size_kb": 0, 00:11:06.223 "state": "configuring", 00:11:06.223 "raid_level": "raid1", 00:11:06.223 "superblock": false, 00:11:06.223 "num_base_bdevs": 4, 00:11:06.223 "num_base_bdevs_discovered": 3, 00:11:06.223 "num_base_bdevs_operational": 4, 00:11:06.223 "base_bdevs_list": [ 00:11:06.223 { 00:11:06.223 "name": "BaseBdev1", 00:11:06.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.223 "is_configured": false, 00:11:06.223 "data_offset": 0, 00:11:06.223 "data_size": 0 00:11:06.223 }, 00:11:06.223 { 00:11:06.223 "name": "BaseBdev2", 00:11:06.223 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:06.223 "is_configured": true, 00:11:06.223 "data_offset": 0, 00:11:06.223 "data_size": 65536 00:11:06.223 }, 00:11:06.223 { 00:11:06.223 "name": "BaseBdev3", 00:11:06.223 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:06.223 "is_configured": true, 00:11:06.223 "data_offset": 0, 00:11:06.223 "data_size": 65536 00:11:06.223 }, 00:11:06.223 { 00:11:06.223 "name": "BaseBdev4", 00:11:06.223 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:06.223 "is_configured": true, 00:11:06.223 "data_offset": 0, 00:11:06.223 "data_size": 65536 00:11:06.223 } 00:11:06.223 ] 00:11:06.223 }' 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.223 18:51:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.793 [2024-11-16 18:51:50.030354] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.793 "name": "Existed_Raid", 00:11:06.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.793 
"strip_size_kb": 0, 00:11:06.793 "state": "configuring", 00:11:06.793 "raid_level": "raid1", 00:11:06.793 "superblock": false, 00:11:06.793 "num_base_bdevs": 4, 00:11:06.793 "num_base_bdevs_discovered": 2, 00:11:06.793 "num_base_bdevs_operational": 4, 00:11:06.793 "base_bdevs_list": [ 00:11:06.793 { 00:11:06.793 "name": "BaseBdev1", 00:11:06.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.793 "is_configured": false, 00:11:06.793 "data_offset": 0, 00:11:06.793 "data_size": 0 00:11:06.793 }, 00:11:06.793 { 00:11:06.793 "name": null, 00:11:06.793 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:06.793 "is_configured": false, 00:11:06.793 "data_offset": 0, 00:11:06.793 "data_size": 65536 00:11:06.793 }, 00:11:06.793 { 00:11:06.793 "name": "BaseBdev3", 00:11:06.793 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:06.793 "is_configured": true, 00:11:06.793 "data_offset": 0, 00:11:06.793 "data_size": 65536 00:11:06.793 }, 00:11:06.793 { 00:11:06.793 "name": "BaseBdev4", 00:11:06.793 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:06.793 "is_configured": true, 00:11:06.793 "data_offset": 0, 00:11:06.793 "data_size": 65536 00:11:06.793 } 00:11:06.793 ] 00:11:06.793 }' 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.793 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.053 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.053 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.053 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.053 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:07.053 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.053 18:51:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:07.053 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:07.053 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.053 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.313 [2024-11-16 18:51:50.525150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:07.313 BaseBdev1 00:11:07.313 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.313 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:07.313 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:07.313 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:07.313 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.314 [ 00:11:07.314 { 00:11:07.314 "name": "BaseBdev1", 00:11:07.314 "aliases": [ 00:11:07.314 "d4fc51a6-757a-474a-9d36-d829e83d7fb8" 00:11:07.314 ], 00:11:07.314 "product_name": "Malloc disk", 00:11:07.314 "block_size": 512, 00:11:07.314 "num_blocks": 65536, 00:11:07.314 "uuid": "d4fc51a6-757a-474a-9d36-d829e83d7fb8", 00:11:07.314 "assigned_rate_limits": { 00:11:07.314 "rw_ios_per_sec": 0, 00:11:07.314 "rw_mbytes_per_sec": 0, 00:11:07.314 "r_mbytes_per_sec": 0, 00:11:07.314 "w_mbytes_per_sec": 0 00:11:07.314 }, 00:11:07.314 "claimed": true, 00:11:07.314 "claim_type": "exclusive_write", 00:11:07.314 "zoned": false, 00:11:07.314 "supported_io_types": { 00:11:07.314 "read": true, 00:11:07.314 "write": true, 00:11:07.314 "unmap": true, 00:11:07.314 "flush": true, 00:11:07.314 "reset": true, 00:11:07.314 "nvme_admin": false, 00:11:07.314 "nvme_io": false, 00:11:07.314 "nvme_io_md": false, 00:11:07.314 "write_zeroes": true, 00:11:07.314 "zcopy": true, 00:11:07.314 "get_zone_info": false, 00:11:07.314 "zone_management": false, 00:11:07.314 "zone_append": false, 00:11:07.314 "compare": false, 00:11:07.314 "compare_and_write": false, 00:11:07.314 "abort": true, 00:11:07.314 "seek_hole": false, 00:11:07.314 "seek_data": false, 00:11:07.314 "copy": true, 00:11:07.314 "nvme_iov_md": false 00:11:07.314 }, 00:11:07.314 "memory_domains": [ 00:11:07.314 { 00:11:07.314 "dma_device_id": "system", 00:11:07.314 "dma_device_type": 1 00:11:07.314 }, 00:11:07.314 { 00:11:07.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.314 "dma_device_type": 2 00:11:07.314 } 00:11:07.314 ], 00:11:07.314 "driver_specific": {} 00:11:07.314 } 00:11:07.314 ] 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.314 "name": "Existed_Raid", 00:11:07.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.314 
"strip_size_kb": 0, 00:11:07.314 "state": "configuring", 00:11:07.314 "raid_level": "raid1", 00:11:07.314 "superblock": false, 00:11:07.314 "num_base_bdevs": 4, 00:11:07.314 "num_base_bdevs_discovered": 3, 00:11:07.314 "num_base_bdevs_operational": 4, 00:11:07.314 "base_bdevs_list": [ 00:11:07.314 { 00:11:07.314 "name": "BaseBdev1", 00:11:07.314 "uuid": "d4fc51a6-757a-474a-9d36-d829e83d7fb8", 00:11:07.314 "is_configured": true, 00:11:07.314 "data_offset": 0, 00:11:07.314 "data_size": 65536 00:11:07.314 }, 00:11:07.314 { 00:11:07.314 "name": null, 00:11:07.314 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:07.314 "is_configured": false, 00:11:07.314 "data_offset": 0, 00:11:07.314 "data_size": 65536 00:11:07.314 }, 00:11:07.314 { 00:11:07.314 "name": "BaseBdev3", 00:11:07.314 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:07.314 "is_configured": true, 00:11:07.314 "data_offset": 0, 00:11:07.314 "data_size": 65536 00:11:07.314 }, 00:11:07.314 { 00:11:07.314 "name": "BaseBdev4", 00:11:07.314 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:07.314 "is_configured": true, 00:11:07.314 "data_offset": 0, 00:11:07.314 "data_size": 65536 00:11:07.314 } 00:11:07.314 ] 00:11:07.314 }' 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.314 18:51:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.574 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.574 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.574 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.574 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.574 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.835 
18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.835 [2024-11-16 18:51:51.056358] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.835 18:51:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.835 "name": "Existed_Raid", 00:11:07.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.835 "strip_size_kb": 0, 00:11:07.835 "state": "configuring", 00:11:07.835 "raid_level": "raid1", 00:11:07.835 "superblock": false, 00:11:07.835 "num_base_bdevs": 4, 00:11:07.835 "num_base_bdevs_discovered": 2, 00:11:07.835 "num_base_bdevs_operational": 4, 00:11:07.835 "base_bdevs_list": [ 00:11:07.835 { 00:11:07.835 "name": "BaseBdev1", 00:11:07.835 "uuid": "d4fc51a6-757a-474a-9d36-d829e83d7fb8", 00:11:07.835 "is_configured": true, 00:11:07.835 "data_offset": 0, 00:11:07.835 "data_size": 65536 00:11:07.835 }, 00:11:07.835 { 00:11:07.835 "name": null, 00:11:07.835 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:07.835 "is_configured": false, 00:11:07.835 "data_offset": 0, 00:11:07.835 "data_size": 65536 00:11:07.835 }, 00:11:07.835 { 00:11:07.835 "name": null, 00:11:07.835 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:07.835 "is_configured": false, 00:11:07.835 "data_offset": 0, 00:11:07.835 "data_size": 65536 00:11:07.835 }, 00:11:07.835 { 00:11:07.835 "name": "BaseBdev4", 00:11:07.835 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:07.835 "is_configured": true, 00:11:07.835 "data_offset": 0, 00:11:07.835 "data_size": 65536 00:11:07.835 } 00:11:07.835 ] 00:11:07.835 }' 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.835 18:51:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.094 [2024-11-16 18:51:51.503588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.094 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.094 "name": "Existed_Raid", 00:11:08.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.094 "strip_size_kb": 0, 00:11:08.094 "state": "configuring", 00:11:08.094 "raid_level": "raid1", 00:11:08.094 "superblock": false, 00:11:08.094 "num_base_bdevs": 4, 00:11:08.094 "num_base_bdevs_discovered": 3, 00:11:08.094 "num_base_bdevs_operational": 4, 00:11:08.094 "base_bdevs_list": [ 00:11:08.094 { 00:11:08.094 "name": "BaseBdev1", 00:11:08.094 "uuid": "d4fc51a6-757a-474a-9d36-d829e83d7fb8", 00:11:08.094 "is_configured": true, 00:11:08.094 "data_offset": 0, 00:11:08.094 "data_size": 65536 00:11:08.094 }, 00:11:08.094 { 00:11:08.094 "name": null, 00:11:08.094 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:08.094 "is_configured": false, 00:11:08.094 "data_offset": 0, 00:11:08.094 "data_size": 65536 00:11:08.094 }, 00:11:08.094 { 
00:11:08.094 "name": "BaseBdev3", 00:11:08.094 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:08.094 "is_configured": true, 00:11:08.094 "data_offset": 0, 00:11:08.094 "data_size": 65536 00:11:08.094 }, 00:11:08.094 { 00:11:08.094 "name": "BaseBdev4", 00:11:08.095 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:08.095 "is_configured": true, 00:11:08.095 "data_offset": 0, 00:11:08.095 "data_size": 65536 00:11:08.095 } 00:11:08.095 ] 00:11:08.095 }' 00:11:08.095 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.095 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.662 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.662 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.662 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.662 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.662 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.662 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:08.662 18:51:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.662 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.662 18:51:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.662 [2024-11-16 18:51:51.962830] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.662 "name": "Existed_Raid", 00:11:08.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.662 "strip_size_kb": 0, 00:11:08.662 "state": "configuring", 00:11:08.662 "raid_level": "raid1", 00:11:08.662 "superblock": false, 00:11:08.662 
"num_base_bdevs": 4, 00:11:08.662 "num_base_bdevs_discovered": 2, 00:11:08.662 "num_base_bdevs_operational": 4, 00:11:08.662 "base_bdevs_list": [ 00:11:08.662 { 00:11:08.662 "name": null, 00:11:08.662 "uuid": "d4fc51a6-757a-474a-9d36-d829e83d7fb8", 00:11:08.662 "is_configured": false, 00:11:08.662 "data_offset": 0, 00:11:08.662 "data_size": 65536 00:11:08.662 }, 00:11:08.662 { 00:11:08.662 "name": null, 00:11:08.662 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:08.662 "is_configured": false, 00:11:08.662 "data_offset": 0, 00:11:08.662 "data_size": 65536 00:11:08.662 }, 00:11:08.662 { 00:11:08.662 "name": "BaseBdev3", 00:11:08.662 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:08.662 "is_configured": true, 00:11:08.662 "data_offset": 0, 00:11:08.662 "data_size": 65536 00:11:08.662 }, 00:11:08.662 { 00:11:08.662 "name": "BaseBdev4", 00:11:08.662 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:08.662 "is_configured": true, 00:11:08.662 "data_offset": 0, 00:11:08.662 "data_size": 65536 00:11:08.662 } 00:11:08.662 ] 00:11:08.662 }' 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.662 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:09.230 18:51:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.230 [2024-11-16 18:51:52.522101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.230 18:51:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.230 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.230 "name": "Existed_Raid", 00:11:09.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:09.230 "strip_size_kb": 0, 00:11:09.230 "state": "configuring", 00:11:09.230 "raid_level": "raid1", 00:11:09.230 "superblock": false, 00:11:09.230 "num_base_bdevs": 4, 00:11:09.230 "num_base_bdevs_discovered": 3, 00:11:09.230 "num_base_bdevs_operational": 4, 00:11:09.230 "base_bdevs_list": [ 00:11:09.230 { 00:11:09.230 "name": null, 00:11:09.230 "uuid": "d4fc51a6-757a-474a-9d36-d829e83d7fb8", 00:11:09.230 "is_configured": false, 00:11:09.230 "data_offset": 0, 00:11:09.230 "data_size": 65536 00:11:09.230 }, 00:11:09.230 { 00:11:09.230 "name": "BaseBdev2", 00:11:09.230 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:09.230 "is_configured": true, 00:11:09.230 "data_offset": 0, 00:11:09.230 "data_size": 65536 00:11:09.230 }, 00:11:09.230 { 00:11:09.230 "name": "BaseBdev3", 00:11:09.230 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:09.230 "is_configured": true, 00:11:09.230 "data_offset": 0, 00:11:09.230 "data_size": 65536 00:11:09.230 }, 00:11:09.230 { 00:11:09.230 "name": "BaseBdev4", 00:11:09.231 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:09.231 "is_configured": true, 00:11:09.231 "data_offset": 0, 00:11:09.231 "data_size": 65536 00:11:09.231 } 00:11:09.231 ] 00:11:09.231 }' 00:11:09.231 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.231 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.489 18:51:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.489 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.489 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.489 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.489 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.489 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:09.489 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.489 18:51:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:09.489 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.489 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.748 18:51:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u d4fc51a6-757a-474a-9d36-d829e83d7fb8 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.748 [2024-11-16 18:51:53.041074] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:09.748 [2024-11-16 18:51:53.041200] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:09.748 [2024-11-16 18:51:53.041217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:09.748 
[2024-11-16 18:51:53.041507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:09.748 [2024-11-16 18:51:53.041662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:09.748 [2024-11-16 18:51:53.041694] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:09.748 [2024-11-16 18:51:53.041923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.748 NewBaseBdev 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.748 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.749 [ 00:11:09.749 { 00:11:09.749 "name": "NewBaseBdev", 00:11:09.749 "aliases": [ 00:11:09.749 "d4fc51a6-757a-474a-9d36-d829e83d7fb8" 00:11:09.749 ], 00:11:09.749 "product_name": "Malloc disk", 00:11:09.749 "block_size": 512, 00:11:09.749 "num_blocks": 65536, 00:11:09.749 "uuid": "d4fc51a6-757a-474a-9d36-d829e83d7fb8", 00:11:09.749 "assigned_rate_limits": { 00:11:09.749 "rw_ios_per_sec": 0, 00:11:09.749 "rw_mbytes_per_sec": 0, 00:11:09.749 "r_mbytes_per_sec": 0, 00:11:09.749 "w_mbytes_per_sec": 0 00:11:09.749 }, 00:11:09.749 "claimed": true, 00:11:09.749 "claim_type": "exclusive_write", 00:11:09.749 "zoned": false, 00:11:09.749 "supported_io_types": { 00:11:09.749 "read": true, 00:11:09.749 "write": true, 00:11:09.749 "unmap": true, 00:11:09.749 "flush": true, 00:11:09.749 "reset": true, 00:11:09.749 "nvme_admin": false, 00:11:09.749 "nvme_io": false, 00:11:09.749 "nvme_io_md": false, 00:11:09.749 "write_zeroes": true, 00:11:09.749 "zcopy": true, 00:11:09.749 "get_zone_info": false, 00:11:09.749 "zone_management": false, 00:11:09.749 "zone_append": false, 00:11:09.749 "compare": false, 00:11:09.749 "compare_and_write": false, 00:11:09.749 "abort": true, 00:11:09.749 "seek_hole": false, 00:11:09.749 "seek_data": false, 00:11:09.749 "copy": true, 00:11:09.749 "nvme_iov_md": false 00:11:09.749 }, 00:11:09.749 "memory_domains": [ 00:11:09.749 { 00:11:09.749 "dma_device_id": "system", 00:11:09.749 "dma_device_type": 1 00:11:09.749 }, 00:11:09.749 { 00:11:09.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.749 "dma_device_type": 2 00:11:09.749 } 00:11:09.749 ], 00:11:09.749 "driver_specific": {} 00:11:09.749 } 00:11:09.749 ] 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.749 "name": "Existed_Raid", 00:11:09.749 "uuid": "922f3ed0-3e60-4233-a822-bfe82bf2ce7e", 00:11:09.749 "strip_size_kb": 0, 00:11:09.749 "state": "online", 00:11:09.749 
"raid_level": "raid1", 00:11:09.749 "superblock": false, 00:11:09.749 "num_base_bdevs": 4, 00:11:09.749 "num_base_bdevs_discovered": 4, 00:11:09.749 "num_base_bdevs_operational": 4, 00:11:09.749 "base_bdevs_list": [ 00:11:09.749 { 00:11:09.749 "name": "NewBaseBdev", 00:11:09.749 "uuid": "d4fc51a6-757a-474a-9d36-d829e83d7fb8", 00:11:09.749 "is_configured": true, 00:11:09.749 "data_offset": 0, 00:11:09.749 "data_size": 65536 00:11:09.749 }, 00:11:09.749 { 00:11:09.749 "name": "BaseBdev2", 00:11:09.749 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:09.749 "is_configured": true, 00:11:09.749 "data_offset": 0, 00:11:09.749 "data_size": 65536 00:11:09.749 }, 00:11:09.749 { 00:11:09.749 "name": "BaseBdev3", 00:11:09.749 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:09.749 "is_configured": true, 00:11:09.749 "data_offset": 0, 00:11:09.749 "data_size": 65536 00:11:09.749 }, 00:11:09.749 { 00:11:09.749 "name": "BaseBdev4", 00:11:09.749 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:09.749 "is_configured": true, 00:11:09.749 "data_offset": 0, 00:11:09.749 "data_size": 65536 00:11:09.749 } 00:11:09.749 ] 00:11:09.749 }' 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.749 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.318 [2024-11-16 18:51:53.508697] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.318 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:10.318 "name": "Existed_Raid", 00:11:10.318 "aliases": [ 00:11:10.318 "922f3ed0-3e60-4233-a822-bfe82bf2ce7e" 00:11:10.318 ], 00:11:10.318 "product_name": "Raid Volume", 00:11:10.318 "block_size": 512, 00:11:10.318 "num_blocks": 65536, 00:11:10.318 "uuid": "922f3ed0-3e60-4233-a822-bfe82bf2ce7e", 00:11:10.318 "assigned_rate_limits": { 00:11:10.318 "rw_ios_per_sec": 0, 00:11:10.318 "rw_mbytes_per_sec": 0, 00:11:10.318 "r_mbytes_per_sec": 0, 00:11:10.318 "w_mbytes_per_sec": 0 00:11:10.318 }, 00:11:10.318 "claimed": false, 00:11:10.318 "zoned": false, 00:11:10.318 "supported_io_types": { 00:11:10.318 "read": true, 00:11:10.318 "write": true, 00:11:10.318 "unmap": false, 00:11:10.318 "flush": false, 00:11:10.318 "reset": true, 00:11:10.318 "nvme_admin": false, 00:11:10.318 "nvme_io": false, 00:11:10.318 "nvme_io_md": false, 00:11:10.318 "write_zeroes": true, 00:11:10.318 "zcopy": false, 00:11:10.318 "get_zone_info": false, 00:11:10.318 "zone_management": false, 00:11:10.318 "zone_append": false, 00:11:10.318 "compare": false, 00:11:10.318 "compare_and_write": false, 00:11:10.318 "abort": false, 00:11:10.318 "seek_hole": false, 00:11:10.318 "seek_data": false, 00:11:10.318 
"copy": false, 00:11:10.318 "nvme_iov_md": false 00:11:10.318 }, 00:11:10.318 "memory_domains": [ 00:11:10.318 { 00:11:10.318 "dma_device_id": "system", 00:11:10.318 "dma_device_type": 1 00:11:10.318 }, 00:11:10.318 { 00:11:10.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.318 "dma_device_type": 2 00:11:10.318 }, 00:11:10.318 { 00:11:10.318 "dma_device_id": "system", 00:11:10.318 "dma_device_type": 1 00:11:10.318 }, 00:11:10.318 { 00:11:10.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.318 "dma_device_type": 2 00:11:10.318 }, 00:11:10.318 { 00:11:10.318 "dma_device_id": "system", 00:11:10.318 "dma_device_type": 1 00:11:10.318 }, 00:11:10.318 { 00:11:10.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.318 "dma_device_type": 2 00:11:10.318 }, 00:11:10.318 { 00:11:10.318 "dma_device_id": "system", 00:11:10.318 "dma_device_type": 1 00:11:10.318 }, 00:11:10.318 { 00:11:10.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:10.318 "dma_device_type": 2 00:11:10.318 } 00:11:10.318 ], 00:11:10.318 "driver_specific": { 00:11:10.318 "raid": { 00:11:10.318 "uuid": "922f3ed0-3e60-4233-a822-bfe82bf2ce7e", 00:11:10.318 "strip_size_kb": 0, 00:11:10.318 "state": "online", 00:11:10.318 "raid_level": "raid1", 00:11:10.318 "superblock": false, 00:11:10.318 "num_base_bdevs": 4, 00:11:10.318 "num_base_bdevs_discovered": 4, 00:11:10.318 "num_base_bdevs_operational": 4, 00:11:10.318 "base_bdevs_list": [ 00:11:10.318 { 00:11:10.318 "name": "NewBaseBdev", 00:11:10.318 "uuid": "d4fc51a6-757a-474a-9d36-d829e83d7fb8", 00:11:10.318 "is_configured": true, 00:11:10.318 "data_offset": 0, 00:11:10.318 "data_size": 65536 00:11:10.318 }, 00:11:10.318 { 00:11:10.318 "name": "BaseBdev2", 00:11:10.319 "uuid": "173759ca-7a7a-4e65-9bc3-321badff1d99", 00:11:10.319 "is_configured": true, 00:11:10.319 "data_offset": 0, 00:11:10.319 "data_size": 65536 00:11:10.319 }, 00:11:10.319 { 00:11:10.319 "name": "BaseBdev3", 00:11:10.319 "uuid": "9025d166-256b-43c9-9936-757c5644d564", 00:11:10.319 
"is_configured": true, 00:11:10.319 "data_offset": 0, 00:11:10.319 "data_size": 65536 00:11:10.319 }, 00:11:10.319 { 00:11:10.319 "name": "BaseBdev4", 00:11:10.319 "uuid": "6eecc0d5-e57e-44fb-99ec-7490d2aa43ec", 00:11:10.319 "is_configured": true, 00:11:10.319 "data_offset": 0, 00:11:10.319 "data_size": 65536 00:11:10.319 } 00:11:10.319 ] 00:11:10.319 } 00:11:10.319 } 00:11:10.319 }' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:10.319 BaseBdev2 00:11:10.319 BaseBdev3 00:11:10.319 BaseBdev4' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.319 18:51:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.319 18:51:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.319 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.578 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.578 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.578 18:51:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.578 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.578 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.578 [2024-11-16 18:51:53.811790] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.578 [2024-11-16 18:51:53.811824] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.578 [2024-11-16 18:51:53.811896] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.578 [2024-11-16 18:51:53.812172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.579 [2024-11-16 18:51:53.812184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72951 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72951 ']' 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72951 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72951 00:11:10.579 killing process with pid 72951 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72951' 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72951 00:11:10.579 [2024-11-16 18:51:53.860021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.579 18:51:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72951 00:11:10.837 [2024-11-16 18:51:54.242750] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:12.216 00:11:12.216 real 0m11.083s 00:11:12.216 user 0m17.631s 00:11:12.216 sys 0m1.880s 00:11:12.216 ************************************ 00:11:12.216 END TEST raid_state_function_test 00:11:12.216 ************************************ 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:12.216 18:51:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:11:12.216 18:51:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.216 18:51:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.216 18:51:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.216 ************************************ 00:11:12.216 START TEST raid_state_function_test_sb 00:11:12.216 ************************************ 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.216 
18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73618 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73618' 00:11:12.216 Process raid pid: 73618 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73618 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73618 ']' 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.216 18:51:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.216 [2024-11-16 18:51:55.484594] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:12.216 [2024-11-16 18:51:55.484765] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.216 [2024-11-16 18:51:55.642844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.476 [2024-11-16 18:51:55.750924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.736 [2024-11-16 18:51:55.955254] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.736 [2024-11-16 18:51:55.955293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.061 [2024-11-16 18:51:56.310107] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.061 [2024-11-16 18:51:56.310164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.061 [2024-11-16 18:51:56.310175] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.061 [2024-11-16 18:51:56.310185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.061 [2024-11-16 18:51:56.310191] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:11:13.061 [2024-11-16 18:51:56.310200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.061 [2024-11-16 18:51:56.310206] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.061 [2024-11-16 18:51:56.310214] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.061 18:51:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.061 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.061 "name": "Existed_Raid", 00:11:13.061 "uuid": "0203a90d-6073-48d1-972f-980bf186606b", 00:11:13.061 "strip_size_kb": 0, 00:11:13.061 "state": "configuring", 00:11:13.061 "raid_level": "raid1", 00:11:13.061 "superblock": true, 00:11:13.061 "num_base_bdevs": 4, 00:11:13.061 "num_base_bdevs_discovered": 0, 00:11:13.061 "num_base_bdevs_operational": 4, 00:11:13.061 "base_bdevs_list": [ 00:11:13.061 { 00:11:13.061 "name": "BaseBdev1", 00:11:13.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.061 "is_configured": false, 00:11:13.061 "data_offset": 0, 00:11:13.061 "data_size": 0 00:11:13.061 }, 00:11:13.061 { 00:11:13.061 "name": "BaseBdev2", 00:11:13.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.061 "is_configured": false, 00:11:13.061 "data_offset": 0, 00:11:13.061 "data_size": 0 00:11:13.061 }, 00:11:13.061 { 00:11:13.061 "name": "BaseBdev3", 00:11:13.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.061 "is_configured": false, 00:11:13.061 "data_offset": 0, 00:11:13.061 "data_size": 0 00:11:13.061 }, 00:11:13.061 { 00:11:13.061 "name": "BaseBdev4", 00:11:13.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.061 "is_configured": false, 00:11:13.061 "data_offset": 0, 00:11:13.061 "data_size": 0 00:11:13.061 } 00:11:13.061 ] 00:11:13.061 }' 00:11:13.062 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.062 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.322 18:51:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.322 [2024-11-16 18:51:56.677446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.322 [2024-11-16 18:51:56.677538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.322 [2024-11-16 18:51:56.689427] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.322 [2024-11-16 18:51:56.689510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.322 [2024-11-16 18:51:56.689540] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.322 [2024-11-16 18:51:56.689562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.322 [2024-11-16 18:51:56.689580] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.322 [2024-11-16 18:51:56.689608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.322 [2024-11-16 18:51:56.689626] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:11:13.322 [2024-11-16 18:51:56.689673] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.322 [2024-11-16 18:51:56.737218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.322 BaseBdev1 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.322 [ 00:11:13.322 { 00:11:13.322 "name": "BaseBdev1", 00:11:13.322 "aliases": [ 00:11:13.322 "4311b6bd-688d-4d6f-a128-128cfc9f5bff" 00:11:13.322 ], 00:11:13.322 "product_name": "Malloc disk", 00:11:13.322 "block_size": 512, 00:11:13.322 "num_blocks": 65536, 00:11:13.322 "uuid": "4311b6bd-688d-4d6f-a128-128cfc9f5bff", 00:11:13.322 "assigned_rate_limits": { 00:11:13.322 "rw_ios_per_sec": 0, 00:11:13.322 "rw_mbytes_per_sec": 0, 00:11:13.322 "r_mbytes_per_sec": 0, 00:11:13.322 "w_mbytes_per_sec": 0 00:11:13.322 }, 00:11:13.322 "claimed": true, 00:11:13.322 "claim_type": "exclusive_write", 00:11:13.322 "zoned": false, 00:11:13.322 "supported_io_types": { 00:11:13.322 "read": true, 00:11:13.322 "write": true, 00:11:13.322 "unmap": true, 00:11:13.322 "flush": true, 00:11:13.322 "reset": true, 00:11:13.322 "nvme_admin": false, 00:11:13.322 "nvme_io": false, 00:11:13.322 "nvme_io_md": false, 00:11:13.322 "write_zeroes": true, 00:11:13.322 "zcopy": true, 00:11:13.322 "get_zone_info": false, 00:11:13.322 "zone_management": false, 00:11:13.322 "zone_append": false, 00:11:13.322 "compare": false, 00:11:13.322 "compare_and_write": false, 00:11:13.322 "abort": true, 00:11:13.322 "seek_hole": false, 00:11:13.322 "seek_data": false, 00:11:13.322 "copy": true, 00:11:13.322 "nvme_iov_md": false 00:11:13.322 }, 00:11:13.322 "memory_domains": [ 00:11:13.322 { 00:11:13.322 "dma_device_id": "system", 00:11:13.322 "dma_device_type": 1 00:11:13.322 }, 00:11:13.322 { 00:11:13.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.322 "dma_device_type": 2 00:11:13.322 } 00:11:13.322 
], 00:11:13.322 "driver_specific": {} 00:11:13.322 } 00:11:13.322 ] 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.322 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.582 18:51:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.582 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.582 "name": "Existed_Raid", 00:11:13.582 "uuid": "1813b629-1669-443f-bd39-4b7afe79c2b7", 00:11:13.582 "strip_size_kb": 0, 00:11:13.582 "state": "configuring", 00:11:13.582 "raid_level": "raid1", 00:11:13.582 "superblock": true, 00:11:13.582 "num_base_bdevs": 4, 00:11:13.582 "num_base_bdevs_discovered": 1, 00:11:13.582 "num_base_bdevs_operational": 4, 00:11:13.582 "base_bdevs_list": [ 00:11:13.582 { 00:11:13.582 "name": "BaseBdev1", 00:11:13.582 "uuid": "4311b6bd-688d-4d6f-a128-128cfc9f5bff", 00:11:13.582 "is_configured": true, 00:11:13.582 "data_offset": 2048, 00:11:13.582 "data_size": 63488 00:11:13.582 }, 00:11:13.582 { 00:11:13.582 "name": "BaseBdev2", 00:11:13.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.582 "is_configured": false, 00:11:13.582 "data_offset": 0, 00:11:13.582 "data_size": 0 00:11:13.582 }, 00:11:13.582 { 00:11:13.582 "name": "BaseBdev3", 00:11:13.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.582 "is_configured": false, 00:11:13.582 "data_offset": 0, 00:11:13.582 "data_size": 0 00:11:13.582 }, 00:11:13.582 { 00:11:13.582 "name": "BaseBdev4", 00:11:13.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.582 "is_configured": false, 00:11:13.582 "data_offset": 0, 00:11:13.582 "data_size": 0 00:11:13.582 } 00:11:13.582 ] 00:11:13.582 }' 00:11:13.582 18:51:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.582 18:51:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.842 18:51:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.842 [2024-11-16 18:51:57.204477] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.842 [2024-11-16 18:51:57.204537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.842 [2024-11-16 18:51:57.216504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.842 [2024-11-16 18:51:57.218335] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.842 [2024-11-16 18:51:57.218392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.842 [2024-11-16 18:51:57.218402] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.842 [2024-11-16 18:51:57.218412] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.842 [2024-11-16 18:51:57.218419] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.842 [2024-11-16 18:51:57.218428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.842 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:11:13.843 "name": "Existed_Raid", 00:11:13.843 "uuid": "a49a98b2-eced-4003-b517-519c0eb1d57c", 00:11:13.843 "strip_size_kb": 0, 00:11:13.843 "state": "configuring", 00:11:13.843 "raid_level": "raid1", 00:11:13.843 "superblock": true, 00:11:13.843 "num_base_bdevs": 4, 00:11:13.843 "num_base_bdevs_discovered": 1, 00:11:13.843 "num_base_bdevs_operational": 4, 00:11:13.843 "base_bdevs_list": [ 00:11:13.843 { 00:11:13.843 "name": "BaseBdev1", 00:11:13.843 "uuid": "4311b6bd-688d-4d6f-a128-128cfc9f5bff", 00:11:13.843 "is_configured": true, 00:11:13.843 "data_offset": 2048, 00:11:13.843 "data_size": 63488 00:11:13.843 }, 00:11:13.843 { 00:11:13.843 "name": "BaseBdev2", 00:11:13.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.843 "is_configured": false, 00:11:13.843 "data_offset": 0, 00:11:13.843 "data_size": 0 00:11:13.843 }, 00:11:13.843 { 00:11:13.843 "name": "BaseBdev3", 00:11:13.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.843 "is_configured": false, 00:11:13.843 "data_offset": 0, 00:11:13.843 "data_size": 0 00:11:13.843 }, 00:11:13.843 { 00:11:13.843 "name": "BaseBdev4", 00:11:13.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.843 "is_configured": false, 00:11:13.843 "data_offset": 0, 00:11:13.843 "data_size": 0 00:11:13.843 } 00:11:13.843 ] 00:11:13.843 }' 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.843 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.412 [2024-11-16 18:51:57.656714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:11:14.412 BaseBdev2 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.412 [ 00:11:14.412 { 00:11:14.412 "name": "BaseBdev2", 00:11:14.412 "aliases": [ 00:11:14.412 "a1efda8f-0067-47c3-af96-42773f08c033" 00:11:14.412 ], 00:11:14.412 "product_name": "Malloc disk", 00:11:14.412 "block_size": 512, 00:11:14.412 "num_blocks": 65536, 00:11:14.412 "uuid": "a1efda8f-0067-47c3-af96-42773f08c033", 00:11:14.412 
"assigned_rate_limits": { 00:11:14.412 "rw_ios_per_sec": 0, 00:11:14.412 "rw_mbytes_per_sec": 0, 00:11:14.412 "r_mbytes_per_sec": 0, 00:11:14.412 "w_mbytes_per_sec": 0 00:11:14.412 }, 00:11:14.412 "claimed": true, 00:11:14.412 "claim_type": "exclusive_write", 00:11:14.412 "zoned": false, 00:11:14.412 "supported_io_types": { 00:11:14.412 "read": true, 00:11:14.412 "write": true, 00:11:14.412 "unmap": true, 00:11:14.412 "flush": true, 00:11:14.412 "reset": true, 00:11:14.412 "nvme_admin": false, 00:11:14.412 "nvme_io": false, 00:11:14.412 "nvme_io_md": false, 00:11:14.412 "write_zeroes": true, 00:11:14.412 "zcopy": true, 00:11:14.412 "get_zone_info": false, 00:11:14.412 "zone_management": false, 00:11:14.412 "zone_append": false, 00:11:14.412 "compare": false, 00:11:14.412 "compare_and_write": false, 00:11:14.412 "abort": true, 00:11:14.412 "seek_hole": false, 00:11:14.412 "seek_data": false, 00:11:14.412 "copy": true, 00:11:14.412 "nvme_iov_md": false 00:11:14.412 }, 00:11:14.412 "memory_domains": [ 00:11:14.412 { 00:11:14.412 "dma_device_id": "system", 00:11:14.412 "dma_device_type": 1 00:11:14.412 }, 00:11:14.412 { 00:11:14.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.412 "dma_device_type": 2 00:11:14.412 } 00:11:14.412 ], 00:11:14.412 "driver_specific": {} 00:11:14.412 } 00:11:14.412 ] 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.412 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.412 "name": "Existed_Raid", 00:11:14.412 "uuid": "a49a98b2-eced-4003-b517-519c0eb1d57c", 00:11:14.412 "strip_size_kb": 0, 00:11:14.412 "state": "configuring", 00:11:14.412 "raid_level": "raid1", 00:11:14.412 "superblock": true, 00:11:14.412 "num_base_bdevs": 4, 00:11:14.412 "num_base_bdevs_discovered": 2, 00:11:14.413 "num_base_bdevs_operational": 4, 
00:11:14.413 "base_bdevs_list": [ 00:11:14.413 { 00:11:14.413 "name": "BaseBdev1", 00:11:14.413 "uuid": "4311b6bd-688d-4d6f-a128-128cfc9f5bff", 00:11:14.413 "is_configured": true, 00:11:14.413 "data_offset": 2048, 00:11:14.413 "data_size": 63488 00:11:14.413 }, 00:11:14.413 { 00:11:14.413 "name": "BaseBdev2", 00:11:14.413 "uuid": "a1efda8f-0067-47c3-af96-42773f08c033", 00:11:14.413 "is_configured": true, 00:11:14.413 "data_offset": 2048, 00:11:14.413 "data_size": 63488 00:11:14.413 }, 00:11:14.413 { 00:11:14.413 "name": "BaseBdev3", 00:11:14.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.413 "is_configured": false, 00:11:14.413 "data_offset": 0, 00:11:14.413 "data_size": 0 00:11:14.413 }, 00:11:14.413 { 00:11:14.413 "name": "BaseBdev4", 00:11:14.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.413 "is_configured": false, 00:11:14.413 "data_offset": 0, 00:11:14.413 "data_size": 0 00:11:14.413 } 00:11:14.413 ] 00:11:14.413 }' 00:11:14.413 18:51:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.413 18:51:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.672 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:14.672 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.672 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.933 [2024-11-16 18:51:58.177743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:14.933 BaseBdev3 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.933 [ 00:11:14.933 { 00:11:14.933 "name": "BaseBdev3", 00:11:14.933 "aliases": [ 00:11:14.933 "d942292c-c62b-4e1a-8dbc-78f4862430e3" 00:11:14.933 ], 00:11:14.933 "product_name": "Malloc disk", 00:11:14.933 "block_size": 512, 00:11:14.933 "num_blocks": 65536, 00:11:14.933 "uuid": "d942292c-c62b-4e1a-8dbc-78f4862430e3", 00:11:14.933 "assigned_rate_limits": { 00:11:14.933 "rw_ios_per_sec": 0, 00:11:14.933 "rw_mbytes_per_sec": 0, 00:11:14.933 "r_mbytes_per_sec": 0, 00:11:14.933 "w_mbytes_per_sec": 0 00:11:14.933 }, 00:11:14.933 "claimed": true, 00:11:14.933 "claim_type": "exclusive_write", 00:11:14.933 "zoned": false, 00:11:14.933 "supported_io_types": { 00:11:14.933 "read": true, 00:11:14.933 
"write": true, 00:11:14.933 "unmap": true, 00:11:14.933 "flush": true, 00:11:14.933 "reset": true, 00:11:14.933 "nvme_admin": false, 00:11:14.933 "nvme_io": false, 00:11:14.933 "nvme_io_md": false, 00:11:14.933 "write_zeroes": true, 00:11:14.933 "zcopy": true, 00:11:14.933 "get_zone_info": false, 00:11:14.933 "zone_management": false, 00:11:14.933 "zone_append": false, 00:11:14.933 "compare": false, 00:11:14.933 "compare_and_write": false, 00:11:14.933 "abort": true, 00:11:14.933 "seek_hole": false, 00:11:14.933 "seek_data": false, 00:11:14.933 "copy": true, 00:11:14.933 "nvme_iov_md": false 00:11:14.933 }, 00:11:14.933 "memory_domains": [ 00:11:14.933 { 00:11:14.933 "dma_device_id": "system", 00:11:14.933 "dma_device_type": 1 00:11:14.933 }, 00:11:14.933 { 00:11:14.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.933 "dma_device_type": 2 00:11:14.933 } 00:11:14.933 ], 00:11:14.933 "driver_specific": {} 00:11:14.933 } 00:11:14.933 ] 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.933 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.934 "name": "Existed_Raid", 00:11:14.934 "uuid": "a49a98b2-eced-4003-b517-519c0eb1d57c", 00:11:14.934 "strip_size_kb": 0, 00:11:14.934 "state": "configuring", 00:11:14.934 "raid_level": "raid1", 00:11:14.934 "superblock": true, 00:11:14.934 "num_base_bdevs": 4, 00:11:14.934 "num_base_bdevs_discovered": 3, 00:11:14.934 "num_base_bdevs_operational": 4, 00:11:14.934 "base_bdevs_list": [ 00:11:14.934 { 00:11:14.934 "name": "BaseBdev1", 00:11:14.934 "uuid": "4311b6bd-688d-4d6f-a128-128cfc9f5bff", 00:11:14.934 "is_configured": true, 00:11:14.934 "data_offset": 2048, 00:11:14.934 "data_size": 63488 00:11:14.934 }, 00:11:14.934 { 00:11:14.934 "name": "BaseBdev2", 00:11:14.934 "uuid": 
"a1efda8f-0067-47c3-af96-42773f08c033", 00:11:14.934 "is_configured": true, 00:11:14.934 "data_offset": 2048, 00:11:14.934 "data_size": 63488 00:11:14.934 }, 00:11:14.934 { 00:11:14.934 "name": "BaseBdev3", 00:11:14.934 "uuid": "d942292c-c62b-4e1a-8dbc-78f4862430e3", 00:11:14.934 "is_configured": true, 00:11:14.934 "data_offset": 2048, 00:11:14.934 "data_size": 63488 00:11:14.934 }, 00:11:14.934 { 00:11:14.934 "name": "BaseBdev4", 00:11:14.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.934 "is_configured": false, 00:11:14.934 "data_offset": 0, 00:11:14.934 "data_size": 0 00:11:14.934 } 00:11:14.934 ] 00:11:14.934 }' 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.934 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.193 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:15.193 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.193 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.453 [2024-11-16 18:51:58.700434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.453 [2024-11-16 18:51:58.700812] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:15.453 [2024-11-16 18:51:58.700865] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:15.453 [2024-11-16 18:51:58.701159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:15.453 [2024-11-16 18:51:58.701374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.453 [2024-11-16 18:51:58.701423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:11:15.453 BaseBdev4 00:11:15.453 [2024-11-16 18:51:58.701622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.453 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.453 [ 00:11:15.453 { 00:11:15.453 "name": "BaseBdev4", 00:11:15.453 "aliases": [ 00:11:15.453 "6f40de1b-a11f-4edf-8539-87e6ff7634d4" 00:11:15.453 ], 00:11:15.453 "product_name": "Malloc disk", 00:11:15.453 "block_size": 512, 00:11:15.454 
"num_blocks": 65536, 00:11:15.454 "uuid": "6f40de1b-a11f-4edf-8539-87e6ff7634d4", 00:11:15.454 "assigned_rate_limits": { 00:11:15.454 "rw_ios_per_sec": 0, 00:11:15.454 "rw_mbytes_per_sec": 0, 00:11:15.454 "r_mbytes_per_sec": 0, 00:11:15.454 "w_mbytes_per_sec": 0 00:11:15.454 }, 00:11:15.454 "claimed": true, 00:11:15.454 "claim_type": "exclusive_write", 00:11:15.454 "zoned": false, 00:11:15.454 "supported_io_types": { 00:11:15.454 "read": true, 00:11:15.454 "write": true, 00:11:15.454 "unmap": true, 00:11:15.454 "flush": true, 00:11:15.454 "reset": true, 00:11:15.454 "nvme_admin": false, 00:11:15.454 "nvme_io": false, 00:11:15.454 "nvme_io_md": false, 00:11:15.454 "write_zeroes": true, 00:11:15.454 "zcopy": true, 00:11:15.454 "get_zone_info": false, 00:11:15.454 "zone_management": false, 00:11:15.454 "zone_append": false, 00:11:15.454 "compare": false, 00:11:15.454 "compare_and_write": false, 00:11:15.454 "abort": true, 00:11:15.454 "seek_hole": false, 00:11:15.454 "seek_data": false, 00:11:15.454 "copy": true, 00:11:15.454 "nvme_iov_md": false 00:11:15.454 }, 00:11:15.454 "memory_domains": [ 00:11:15.454 { 00:11:15.454 "dma_device_id": "system", 00:11:15.454 "dma_device_type": 1 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.454 "dma_device_type": 2 00:11:15.454 } 00:11:15.454 ], 00:11:15.454 "driver_specific": {} 00:11:15.454 } 00:11:15.454 ] 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.454 "name": "Existed_Raid", 00:11:15.454 "uuid": "a49a98b2-eced-4003-b517-519c0eb1d57c", 00:11:15.454 "strip_size_kb": 0, 00:11:15.454 "state": "online", 00:11:15.454 "raid_level": "raid1", 00:11:15.454 "superblock": true, 00:11:15.454 "num_base_bdevs": 4, 
00:11:15.454 "num_base_bdevs_discovered": 4, 00:11:15.454 "num_base_bdevs_operational": 4, 00:11:15.454 "base_bdevs_list": [ 00:11:15.454 { 00:11:15.454 "name": "BaseBdev1", 00:11:15.454 "uuid": "4311b6bd-688d-4d6f-a128-128cfc9f5bff", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 2048, 00:11:15.454 "data_size": 63488 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "name": "BaseBdev2", 00:11:15.454 "uuid": "a1efda8f-0067-47c3-af96-42773f08c033", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 2048, 00:11:15.454 "data_size": 63488 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "name": "BaseBdev3", 00:11:15.454 "uuid": "d942292c-c62b-4e1a-8dbc-78f4862430e3", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 2048, 00:11:15.454 "data_size": 63488 00:11:15.454 }, 00:11:15.454 { 00:11:15.454 "name": "BaseBdev4", 00:11:15.454 "uuid": "6f40de1b-a11f-4edf-8539-87e6ff7634d4", 00:11:15.454 "is_configured": true, 00:11:15.454 "data_offset": 2048, 00:11:15.454 "data_size": 63488 00:11:15.454 } 00:11:15.454 ] 00:11:15.454 }' 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.454 18:51:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.714 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:15.714 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:15.714 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:15.714 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:15.714 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:15.714 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:15.714 
18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:15.714 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:15.714 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.714 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.714 [2024-11-16 18:51:59.176062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:15.973 "name": "Existed_Raid", 00:11:15.973 "aliases": [ 00:11:15.973 "a49a98b2-eced-4003-b517-519c0eb1d57c" 00:11:15.973 ], 00:11:15.973 "product_name": "Raid Volume", 00:11:15.973 "block_size": 512, 00:11:15.973 "num_blocks": 63488, 00:11:15.973 "uuid": "a49a98b2-eced-4003-b517-519c0eb1d57c", 00:11:15.973 "assigned_rate_limits": { 00:11:15.973 "rw_ios_per_sec": 0, 00:11:15.973 "rw_mbytes_per_sec": 0, 00:11:15.973 "r_mbytes_per_sec": 0, 00:11:15.973 "w_mbytes_per_sec": 0 00:11:15.973 }, 00:11:15.973 "claimed": false, 00:11:15.973 "zoned": false, 00:11:15.973 "supported_io_types": { 00:11:15.973 "read": true, 00:11:15.973 "write": true, 00:11:15.973 "unmap": false, 00:11:15.973 "flush": false, 00:11:15.973 "reset": true, 00:11:15.973 "nvme_admin": false, 00:11:15.973 "nvme_io": false, 00:11:15.973 "nvme_io_md": false, 00:11:15.973 "write_zeroes": true, 00:11:15.973 "zcopy": false, 00:11:15.973 "get_zone_info": false, 00:11:15.973 "zone_management": false, 00:11:15.973 "zone_append": false, 00:11:15.973 "compare": false, 00:11:15.973 "compare_and_write": false, 00:11:15.973 "abort": false, 00:11:15.973 "seek_hole": false, 00:11:15.973 "seek_data": false, 00:11:15.973 "copy": false, 00:11:15.973 
"nvme_iov_md": false 00:11:15.973 }, 00:11:15.973 "memory_domains": [ 00:11:15.973 { 00:11:15.973 "dma_device_id": "system", 00:11:15.973 "dma_device_type": 1 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.973 "dma_device_type": 2 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "dma_device_id": "system", 00:11:15.973 "dma_device_type": 1 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.973 "dma_device_type": 2 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "dma_device_id": "system", 00:11:15.973 "dma_device_type": 1 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.973 "dma_device_type": 2 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "dma_device_id": "system", 00:11:15.973 "dma_device_type": 1 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.973 "dma_device_type": 2 00:11:15.973 } 00:11:15.973 ], 00:11:15.973 "driver_specific": { 00:11:15.973 "raid": { 00:11:15.973 "uuid": "a49a98b2-eced-4003-b517-519c0eb1d57c", 00:11:15.973 "strip_size_kb": 0, 00:11:15.973 "state": "online", 00:11:15.973 "raid_level": "raid1", 00:11:15.973 "superblock": true, 00:11:15.973 "num_base_bdevs": 4, 00:11:15.973 "num_base_bdevs_discovered": 4, 00:11:15.973 "num_base_bdevs_operational": 4, 00:11:15.973 "base_bdevs_list": [ 00:11:15.973 { 00:11:15.973 "name": "BaseBdev1", 00:11:15.973 "uuid": "4311b6bd-688d-4d6f-a128-128cfc9f5bff", 00:11:15.973 "is_configured": true, 00:11:15.973 "data_offset": 2048, 00:11:15.973 "data_size": 63488 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "name": "BaseBdev2", 00:11:15.973 "uuid": "a1efda8f-0067-47c3-af96-42773f08c033", 00:11:15.973 "is_configured": true, 00:11:15.973 "data_offset": 2048, 00:11:15.973 "data_size": 63488 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "name": "BaseBdev3", 00:11:15.973 "uuid": "d942292c-c62b-4e1a-8dbc-78f4862430e3", 00:11:15.973 "is_configured": true, 
00:11:15.973 "data_offset": 2048, 00:11:15.973 "data_size": 63488 00:11:15.973 }, 00:11:15.973 { 00:11:15.973 "name": "BaseBdev4", 00:11:15.973 "uuid": "6f40de1b-a11f-4edf-8539-87e6ff7634d4", 00:11:15.973 "is_configured": true, 00:11:15.973 "data_offset": 2048, 00:11:15.973 "data_size": 63488 00:11:15.973 } 00:11:15.973 ] 00:11:15.973 } 00:11:15.973 } 00:11:15.973 }' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:15.973 BaseBdev2 00:11:15.973 BaseBdev3 00:11:15.973 BaseBdev4' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.973 18:51:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:15.973 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.233 [2024-11-16 18:51:59.471220] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:16.233 18:51:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.233 "name": "Existed_Raid", 00:11:16.233 "uuid": "a49a98b2-eced-4003-b517-519c0eb1d57c", 00:11:16.233 "strip_size_kb": 0, 00:11:16.233 
"state": "online", 00:11:16.233 "raid_level": "raid1", 00:11:16.233 "superblock": true, 00:11:16.233 "num_base_bdevs": 4, 00:11:16.233 "num_base_bdevs_discovered": 3, 00:11:16.233 "num_base_bdevs_operational": 3, 00:11:16.233 "base_bdevs_list": [ 00:11:16.233 { 00:11:16.233 "name": null, 00:11:16.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.233 "is_configured": false, 00:11:16.233 "data_offset": 0, 00:11:16.233 "data_size": 63488 00:11:16.233 }, 00:11:16.233 { 00:11:16.233 "name": "BaseBdev2", 00:11:16.233 "uuid": "a1efda8f-0067-47c3-af96-42773f08c033", 00:11:16.233 "is_configured": true, 00:11:16.233 "data_offset": 2048, 00:11:16.233 "data_size": 63488 00:11:16.233 }, 00:11:16.233 { 00:11:16.233 "name": "BaseBdev3", 00:11:16.233 "uuid": "d942292c-c62b-4e1a-8dbc-78f4862430e3", 00:11:16.233 "is_configured": true, 00:11:16.233 "data_offset": 2048, 00:11:16.233 "data_size": 63488 00:11:16.233 }, 00:11:16.233 { 00:11:16.233 "name": "BaseBdev4", 00:11:16.233 "uuid": "6f40de1b-a11f-4edf-8539-87e6ff7634d4", 00:11:16.233 "is_configured": true, 00:11:16.233 "data_offset": 2048, 00:11:16.233 "data_size": 63488 00:11:16.233 } 00:11:16.233 ] 00:11:16.233 }' 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.233 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:16.803 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.803 18:51:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.803 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.803 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 18:51:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.803 18:51:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 [2024-11-16 18:52:00.031829] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.803 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.803 [2024-11-16 18:52:00.180458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.063 [2024-11-16 18:52:00.329583] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:17.063 [2024-11-16 18:52:00.329751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.063 [2024-11-16 18:52:00.423117] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.063 [2024-11-16 18:52:00.423244] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.063 [2024-11-16 18:52:00.423287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:17.063 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.064 BaseBdev2 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.064 18:52:00 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:11:17.324 [ 00:11:17.324 { 00:11:17.324 "name": "BaseBdev2", 00:11:17.324 "aliases": [ 00:11:17.324 "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e" 00:11:17.324 ], 00:11:17.324 "product_name": "Malloc disk", 00:11:17.324 "block_size": 512, 00:11:17.324 "num_blocks": 65536, 00:11:17.324 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 00:11:17.324 "assigned_rate_limits": { 00:11:17.324 "rw_ios_per_sec": 0, 00:11:17.324 "rw_mbytes_per_sec": 0, 00:11:17.324 "r_mbytes_per_sec": 0, 00:11:17.324 "w_mbytes_per_sec": 0 00:11:17.324 }, 00:11:17.324 "claimed": false, 00:11:17.324 "zoned": false, 00:11:17.324 "supported_io_types": { 00:11:17.324 "read": true, 00:11:17.324 "write": true, 00:11:17.324 "unmap": true, 00:11:17.324 "flush": true, 00:11:17.324 "reset": true, 00:11:17.324 "nvme_admin": false, 00:11:17.324 "nvme_io": false, 00:11:17.324 "nvme_io_md": false, 00:11:17.324 "write_zeroes": true, 00:11:17.324 "zcopy": true, 00:11:17.324 "get_zone_info": false, 00:11:17.324 "zone_management": false, 00:11:17.324 "zone_append": false, 00:11:17.324 "compare": false, 00:11:17.324 "compare_and_write": false, 00:11:17.324 "abort": true, 00:11:17.324 "seek_hole": false, 00:11:17.324 "seek_data": false, 00:11:17.324 "copy": true, 00:11:17.324 "nvme_iov_md": false 00:11:17.324 }, 00:11:17.324 "memory_domains": [ 00:11:17.324 { 00:11:17.324 "dma_device_id": "system", 00:11:17.324 "dma_device_type": 1 00:11:17.324 }, 00:11:17.324 { 00:11:17.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.324 "dma_device_type": 2 00:11:17.324 } 00:11:17.324 ], 00:11:17.324 "driver_specific": {} 00:11:17.324 } 00:11:17.324 ] 00:11:17.324 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.324 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.324 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.324 18:52:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.324 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.324 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.324 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.324 BaseBdev3 00:11:17.324 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.325 18:52:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.325 [ 00:11:17.325 { 00:11:17.325 "name": "BaseBdev3", 00:11:17.325 "aliases": [ 00:11:17.325 "840ce98c-65c7-4b71-a525-552d08010c40" 00:11:17.325 ], 00:11:17.325 "product_name": "Malloc disk", 00:11:17.325 "block_size": 512, 00:11:17.325 "num_blocks": 65536, 00:11:17.325 "uuid": "840ce98c-65c7-4b71-a525-552d08010c40", 00:11:17.325 "assigned_rate_limits": { 00:11:17.325 "rw_ios_per_sec": 0, 00:11:17.325 "rw_mbytes_per_sec": 0, 00:11:17.325 "r_mbytes_per_sec": 0, 00:11:17.325 "w_mbytes_per_sec": 0 00:11:17.325 }, 00:11:17.325 "claimed": false, 00:11:17.325 "zoned": false, 00:11:17.325 "supported_io_types": { 00:11:17.325 "read": true, 00:11:17.325 "write": true, 00:11:17.325 "unmap": true, 00:11:17.325 "flush": true, 00:11:17.325 "reset": true, 00:11:17.325 "nvme_admin": false, 00:11:17.325 "nvme_io": false, 00:11:17.325 "nvme_io_md": false, 00:11:17.325 "write_zeroes": true, 00:11:17.325 "zcopy": true, 00:11:17.325 "get_zone_info": false, 00:11:17.325 "zone_management": false, 00:11:17.325 "zone_append": false, 00:11:17.325 "compare": false, 00:11:17.325 "compare_and_write": false, 00:11:17.325 "abort": true, 00:11:17.325 "seek_hole": false, 00:11:17.325 "seek_data": false, 00:11:17.325 "copy": true, 00:11:17.325 "nvme_iov_md": false 00:11:17.325 }, 00:11:17.325 "memory_domains": [ 00:11:17.325 { 00:11:17.325 "dma_device_id": "system", 00:11:17.325 "dma_device_type": 1 00:11:17.325 }, 00:11:17.325 { 00:11:17.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.325 "dma_device_type": 2 00:11:17.325 } 00:11:17.325 ], 00:11:17.325 "driver_specific": {} 00:11:17.325 } 00:11:17.325 ] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.325 BaseBdev4 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.325 [ 00:11:17.325 { 00:11:17.325 "name": "BaseBdev4", 00:11:17.325 "aliases": [ 00:11:17.325 "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c" 00:11:17.325 ], 00:11:17.325 "product_name": "Malloc disk", 00:11:17.325 "block_size": 512, 00:11:17.325 "num_blocks": 65536, 00:11:17.325 "uuid": "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:17.325 "assigned_rate_limits": { 00:11:17.325 "rw_ios_per_sec": 0, 00:11:17.325 "rw_mbytes_per_sec": 0, 00:11:17.325 "r_mbytes_per_sec": 0, 00:11:17.325 "w_mbytes_per_sec": 0 00:11:17.325 }, 00:11:17.325 "claimed": false, 00:11:17.325 "zoned": false, 00:11:17.325 "supported_io_types": { 00:11:17.325 "read": true, 00:11:17.325 "write": true, 00:11:17.325 "unmap": true, 00:11:17.325 "flush": true, 00:11:17.325 "reset": true, 00:11:17.325 "nvme_admin": false, 00:11:17.325 "nvme_io": false, 00:11:17.325 "nvme_io_md": false, 00:11:17.325 "write_zeroes": true, 00:11:17.325 "zcopy": true, 00:11:17.325 "get_zone_info": false, 00:11:17.325 "zone_management": false, 00:11:17.325 "zone_append": false, 00:11:17.325 "compare": false, 00:11:17.325 "compare_and_write": false, 00:11:17.325 "abort": true, 00:11:17.325 "seek_hole": false, 00:11:17.325 "seek_data": false, 00:11:17.325 "copy": true, 00:11:17.325 "nvme_iov_md": false 00:11:17.325 }, 00:11:17.325 "memory_domains": [ 00:11:17.325 { 00:11:17.325 "dma_device_id": "system", 00:11:17.325 "dma_device_type": 1 00:11:17.325 }, 00:11:17.325 { 00:11:17.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.325 "dma_device_type": 2 00:11:17.325 } 00:11:17.325 ], 00:11:17.325 "driver_specific": {} 00:11:17.325 } 00:11:17.325 ] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.325 [2024-11-16 18:52:00.718365] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.325 [2024-11-16 18:52:00.718451] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.325 [2024-11-16 18:52:00.718509] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.325 [2024-11-16 18:52:00.720267] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.325 [2024-11-16 18:52:00.720357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.325 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.326 "name": "Existed_Raid", 00:11:17.326 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:17.326 "strip_size_kb": 0, 00:11:17.326 "state": "configuring", 00:11:17.326 "raid_level": "raid1", 00:11:17.326 "superblock": true, 00:11:17.326 "num_base_bdevs": 4, 00:11:17.326 "num_base_bdevs_discovered": 3, 00:11:17.326 "num_base_bdevs_operational": 4, 00:11:17.326 "base_bdevs_list": [ 00:11:17.326 { 00:11:17.326 "name": "BaseBdev1", 00:11:17.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.326 "is_configured": false, 00:11:17.326 "data_offset": 0, 00:11:17.326 "data_size": 0 00:11:17.326 }, 00:11:17.326 { 00:11:17.326 "name": "BaseBdev2", 00:11:17.326 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 
00:11:17.326 "is_configured": true, 00:11:17.326 "data_offset": 2048, 00:11:17.326 "data_size": 63488 00:11:17.326 }, 00:11:17.326 { 00:11:17.326 "name": "BaseBdev3", 00:11:17.326 "uuid": "840ce98c-65c7-4b71-a525-552d08010c40", 00:11:17.326 "is_configured": true, 00:11:17.326 "data_offset": 2048, 00:11:17.326 "data_size": 63488 00:11:17.326 }, 00:11:17.326 { 00:11:17.326 "name": "BaseBdev4", 00:11:17.326 "uuid": "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:17.326 "is_configured": true, 00:11:17.326 "data_offset": 2048, 00:11:17.326 "data_size": 63488 00:11:17.326 } 00:11:17.326 ] 00:11:17.326 }' 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.326 18:52:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.896 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:17.896 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.896 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.896 [2024-11-16 18:52:01.129695] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.896 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.896 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:17.896 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.896 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.897 "name": "Existed_Raid", 00:11:17.897 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:17.897 "strip_size_kb": 0, 00:11:17.897 "state": "configuring", 00:11:17.897 "raid_level": "raid1", 00:11:17.897 "superblock": true, 00:11:17.897 "num_base_bdevs": 4, 00:11:17.897 "num_base_bdevs_discovered": 2, 00:11:17.897 "num_base_bdevs_operational": 4, 00:11:17.897 "base_bdevs_list": [ 00:11:17.897 { 00:11:17.897 "name": "BaseBdev1", 00:11:17.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.897 "is_configured": false, 00:11:17.897 "data_offset": 0, 00:11:17.897 "data_size": 0 00:11:17.897 }, 00:11:17.897 { 00:11:17.897 "name": null, 00:11:17.897 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 00:11:17.897 
"is_configured": false, 00:11:17.897 "data_offset": 0, 00:11:17.897 "data_size": 63488 00:11:17.897 }, 00:11:17.897 { 00:11:17.897 "name": "BaseBdev3", 00:11:17.897 "uuid": "840ce98c-65c7-4b71-a525-552d08010c40", 00:11:17.897 "is_configured": true, 00:11:17.897 "data_offset": 2048, 00:11:17.897 "data_size": 63488 00:11:17.897 }, 00:11:17.897 { 00:11:17.897 "name": "BaseBdev4", 00:11:17.897 "uuid": "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:17.897 "is_configured": true, 00:11:17.897 "data_offset": 2048, 00:11:17.897 "data_size": 63488 00:11:17.897 } 00:11:17.897 ] 00:11:17.897 }' 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.897 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.157 [2024-11-16 18:52:01.520599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:18.157 BaseBdev1 
00:11:18.157 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.158 [ 00:11:18.158 { 00:11:18.158 "name": "BaseBdev1", 00:11:18.158 "aliases": [ 00:11:18.158 "baa708b3-dacf-476b-9da8-4307a79d5812" 00:11:18.158 ], 00:11:18.158 "product_name": "Malloc disk", 00:11:18.158 "block_size": 512, 00:11:18.158 "num_blocks": 65536, 00:11:18.158 "uuid": "baa708b3-dacf-476b-9da8-4307a79d5812", 00:11:18.158 "assigned_rate_limits": { 00:11:18.158 
"rw_ios_per_sec": 0, 00:11:18.158 "rw_mbytes_per_sec": 0, 00:11:18.158 "r_mbytes_per_sec": 0, 00:11:18.158 "w_mbytes_per_sec": 0 00:11:18.158 }, 00:11:18.158 "claimed": true, 00:11:18.158 "claim_type": "exclusive_write", 00:11:18.158 "zoned": false, 00:11:18.158 "supported_io_types": { 00:11:18.158 "read": true, 00:11:18.158 "write": true, 00:11:18.158 "unmap": true, 00:11:18.158 "flush": true, 00:11:18.158 "reset": true, 00:11:18.158 "nvme_admin": false, 00:11:18.158 "nvme_io": false, 00:11:18.158 "nvme_io_md": false, 00:11:18.158 "write_zeroes": true, 00:11:18.158 "zcopy": true, 00:11:18.158 "get_zone_info": false, 00:11:18.158 "zone_management": false, 00:11:18.158 "zone_append": false, 00:11:18.158 "compare": false, 00:11:18.158 "compare_and_write": false, 00:11:18.158 "abort": true, 00:11:18.158 "seek_hole": false, 00:11:18.158 "seek_data": false, 00:11:18.158 "copy": true, 00:11:18.158 "nvme_iov_md": false 00:11:18.158 }, 00:11:18.158 "memory_domains": [ 00:11:18.158 { 00:11:18.158 "dma_device_id": "system", 00:11:18.158 "dma_device_type": 1 00:11:18.158 }, 00:11:18.158 { 00:11:18.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.158 "dma_device_type": 2 00:11:18.158 } 00:11:18.158 ], 00:11:18.158 "driver_specific": {} 00:11:18.158 } 00:11:18.158 ] 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.158 "name": "Existed_Raid", 00:11:18.158 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:18.158 "strip_size_kb": 0, 00:11:18.158 "state": "configuring", 00:11:18.158 "raid_level": "raid1", 00:11:18.158 "superblock": true, 00:11:18.158 "num_base_bdevs": 4, 00:11:18.158 "num_base_bdevs_discovered": 3, 00:11:18.158 "num_base_bdevs_operational": 4, 00:11:18.158 "base_bdevs_list": [ 00:11:18.158 { 00:11:18.158 "name": "BaseBdev1", 00:11:18.158 "uuid": "baa708b3-dacf-476b-9da8-4307a79d5812", 00:11:18.158 "is_configured": true, 00:11:18.158 "data_offset": 2048, 00:11:18.158 "data_size": 63488 
00:11:18.158 }, 00:11:18.158 { 00:11:18.158 "name": null, 00:11:18.158 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 00:11:18.158 "is_configured": false, 00:11:18.158 "data_offset": 0, 00:11:18.158 "data_size": 63488 00:11:18.158 }, 00:11:18.158 { 00:11:18.158 "name": "BaseBdev3", 00:11:18.158 "uuid": "840ce98c-65c7-4b71-a525-552d08010c40", 00:11:18.158 "is_configured": true, 00:11:18.158 "data_offset": 2048, 00:11:18.158 "data_size": 63488 00:11:18.158 }, 00:11:18.158 { 00:11:18.158 "name": "BaseBdev4", 00:11:18.158 "uuid": "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:18.158 "is_configured": true, 00:11:18.158 "data_offset": 2048, 00:11:18.158 "data_size": 63488 00:11:18.158 } 00:11:18.158 ] 00:11:18.158 }' 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.158 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.729 
[2024-11-16 18:52:01.959973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.729 18:52:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.729 "name": "Existed_Raid", 00:11:18.729 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:18.729 "strip_size_kb": 0, 00:11:18.729 "state": "configuring", 00:11:18.729 "raid_level": "raid1", 00:11:18.729 "superblock": true, 00:11:18.729 "num_base_bdevs": 4, 00:11:18.729 "num_base_bdevs_discovered": 2, 00:11:18.729 "num_base_bdevs_operational": 4, 00:11:18.729 "base_bdevs_list": [ 00:11:18.729 { 00:11:18.729 "name": "BaseBdev1", 00:11:18.729 "uuid": "baa708b3-dacf-476b-9da8-4307a79d5812", 00:11:18.729 "is_configured": true, 00:11:18.729 "data_offset": 2048, 00:11:18.729 "data_size": 63488 00:11:18.729 }, 00:11:18.729 { 00:11:18.729 "name": null, 00:11:18.729 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 00:11:18.729 "is_configured": false, 00:11:18.729 "data_offset": 0, 00:11:18.729 "data_size": 63488 00:11:18.729 }, 00:11:18.729 { 00:11:18.729 "name": null, 00:11:18.729 "uuid": "840ce98c-65c7-4b71-a525-552d08010c40", 00:11:18.729 "is_configured": false, 00:11:18.729 "data_offset": 0, 00:11:18.729 "data_size": 63488 00:11:18.729 }, 00:11:18.729 { 00:11:18.729 "name": "BaseBdev4", 00:11:18.729 "uuid": "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:18.729 "is_configured": true, 00:11:18.729 "data_offset": 2048, 00:11:18.729 "data_size": 63488 00:11:18.729 } 00:11:18.729 ] 00:11:18.729 }' 00:11:18.729 18:52:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.729 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.989 
18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.989 [2024-11-16 18:52:02.427098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.989 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.249 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.249 "name": "Existed_Raid", 00:11:19.249 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:19.249 "strip_size_kb": 0, 00:11:19.249 "state": "configuring", 00:11:19.249 "raid_level": "raid1", 00:11:19.249 "superblock": true, 00:11:19.249 "num_base_bdevs": 4, 00:11:19.249 "num_base_bdevs_discovered": 3, 00:11:19.249 "num_base_bdevs_operational": 4, 00:11:19.249 "base_bdevs_list": [ 00:11:19.249 { 00:11:19.249 "name": "BaseBdev1", 00:11:19.249 "uuid": "baa708b3-dacf-476b-9da8-4307a79d5812", 00:11:19.249 "is_configured": true, 00:11:19.249 "data_offset": 2048, 00:11:19.249 "data_size": 63488 00:11:19.250 }, 00:11:19.250 { 00:11:19.250 "name": null, 00:11:19.250 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 00:11:19.250 "is_configured": false, 00:11:19.250 "data_offset": 0, 00:11:19.250 "data_size": 63488 00:11:19.250 }, 00:11:19.250 { 00:11:19.250 "name": "BaseBdev3", 00:11:19.250 "uuid": "840ce98c-65c7-4b71-a525-552d08010c40", 00:11:19.250 "is_configured": true, 00:11:19.250 "data_offset": 2048, 00:11:19.250 "data_size": 63488 00:11:19.250 }, 00:11:19.250 { 00:11:19.250 "name": "BaseBdev4", 00:11:19.250 "uuid": 
"221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:19.250 "is_configured": true, 00:11:19.250 "data_offset": 2048, 00:11:19.250 "data_size": 63488 00:11:19.250 } 00:11:19.250 ] 00:11:19.250 }' 00:11:19.250 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.250 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.509 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.509 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.509 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.509 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.509 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.510 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:19.510 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:19.510 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.510 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.510 [2024-11-16 18:52:02.906306] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:19.769 18:52:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.769 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:19.769 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.770 18:52:02 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.770 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.770 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.770 18:52:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.770 "name": "Existed_Raid", 00:11:19.770 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:19.770 "strip_size_kb": 0, 00:11:19.770 "state": "configuring", 00:11:19.770 "raid_level": "raid1", 00:11:19.770 "superblock": true, 00:11:19.770 "num_base_bdevs": 4, 00:11:19.770 "num_base_bdevs_discovered": 2, 00:11:19.770 "num_base_bdevs_operational": 4, 00:11:19.770 "base_bdevs_list": [ 00:11:19.770 { 00:11:19.770 "name": null, 00:11:19.770 
"uuid": "baa708b3-dacf-476b-9da8-4307a79d5812", 00:11:19.770 "is_configured": false, 00:11:19.770 "data_offset": 0, 00:11:19.770 "data_size": 63488 00:11:19.770 }, 00:11:19.770 { 00:11:19.770 "name": null, 00:11:19.770 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 00:11:19.770 "is_configured": false, 00:11:19.770 "data_offset": 0, 00:11:19.770 "data_size": 63488 00:11:19.770 }, 00:11:19.770 { 00:11:19.770 "name": "BaseBdev3", 00:11:19.770 "uuid": "840ce98c-65c7-4b71-a525-552d08010c40", 00:11:19.770 "is_configured": true, 00:11:19.770 "data_offset": 2048, 00:11:19.770 "data_size": 63488 00:11:19.770 }, 00:11:19.770 { 00:11:19.770 "name": "BaseBdev4", 00:11:19.770 "uuid": "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:19.770 "is_configured": true, 00:11:19.770 "data_offset": 2048, 00:11:19.770 "data_size": 63488 00:11:19.770 } 00:11:19.770 ] 00:11:19.770 }' 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.770 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.030 [2024-11-16 18:52:03.479996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.030 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.290 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.290 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.290 "name": "Existed_Raid", 00:11:20.290 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:20.290 "strip_size_kb": 0, 00:11:20.290 "state": "configuring", 00:11:20.290 "raid_level": "raid1", 00:11:20.290 "superblock": true, 00:11:20.290 "num_base_bdevs": 4, 00:11:20.290 "num_base_bdevs_discovered": 3, 00:11:20.290 "num_base_bdevs_operational": 4, 00:11:20.290 "base_bdevs_list": [ 00:11:20.290 { 00:11:20.290 "name": null, 00:11:20.290 "uuid": "baa708b3-dacf-476b-9da8-4307a79d5812", 00:11:20.290 "is_configured": false, 00:11:20.290 "data_offset": 0, 00:11:20.290 "data_size": 63488 00:11:20.290 }, 00:11:20.290 { 00:11:20.290 "name": "BaseBdev2", 00:11:20.290 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 00:11:20.290 "is_configured": true, 00:11:20.290 "data_offset": 2048, 00:11:20.290 "data_size": 63488 00:11:20.290 }, 00:11:20.290 { 00:11:20.290 "name": "BaseBdev3", 00:11:20.290 "uuid": "840ce98c-65c7-4b71-a525-552d08010c40", 00:11:20.290 "is_configured": true, 00:11:20.290 "data_offset": 2048, 00:11:20.290 "data_size": 63488 00:11:20.290 }, 00:11:20.290 { 00:11:20.290 "name": "BaseBdev4", 00:11:20.290 "uuid": "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:20.290 "is_configured": true, 00:11:20.290 "data_offset": 2048, 00:11:20.290 "data_size": 63488 00:11:20.290 } 00:11:20.290 ] 00:11:20.290 }' 00:11:20.290 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.290 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.550 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.551 18:52:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u baa708b3-dacf-476b-9da8-4307a79d5812 00:11:20.551 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.551 18:52:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.551 [2024-11-16 18:52:04.001947] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:20.551 [2024-11-16 18:52:04.002248] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:20.551 [2024-11-16 18:52:04.002298] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:20.551 [2024-11-16 18:52:04.002593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:20.551 
[2024-11-16 18:52:04.002797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:20.551 [2024-11-16 18:52:04.002841] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:20.551 [2024-11-16 18:52:04.003012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.551 NewBaseBdev 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.551 18:52:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.811 [ 00:11:20.811 { 00:11:20.811 "name": "NewBaseBdev", 00:11:20.811 "aliases": [ 00:11:20.811 "baa708b3-dacf-476b-9da8-4307a79d5812" 00:11:20.811 ], 00:11:20.811 "product_name": "Malloc disk", 00:11:20.811 "block_size": 512, 00:11:20.811 "num_blocks": 65536, 00:11:20.811 "uuid": "baa708b3-dacf-476b-9da8-4307a79d5812", 00:11:20.811 "assigned_rate_limits": { 00:11:20.811 "rw_ios_per_sec": 0, 00:11:20.811 "rw_mbytes_per_sec": 0, 00:11:20.811 "r_mbytes_per_sec": 0, 00:11:20.811 "w_mbytes_per_sec": 0 00:11:20.811 }, 00:11:20.811 "claimed": true, 00:11:20.811 "claim_type": "exclusive_write", 00:11:20.811 "zoned": false, 00:11:20.811 "supported_io_types": { 00:11:20.811 "read": true, 00:11:20.811 "write": true, 00:11:20.811 "unmap": true, 00:11:20.811 "flush": true, 00:11:20.811 "reset": true, 00:11:20.811 "nvme_admin": false, 00:11:20.811 "nvme_io": false, 00:11:20.811 "nvme_io_md": false, 00:11:20.811 "write_zeroes": true, 00:11:20.811 "zcopy": true, 00:11:20.811 "get_zone_info": false, 00:11:20.811 "zone_management": false, 00:11:20.811 "zone_append": false, 00:11:20.811 "compare": false, 00:11:20.811 "compare_and_write": false, 00:11:20.811 "abort": true, 00:11:20.811 "seek_hole": false, 00:11:20.811 "seek_data": false, 00:11:20.811 "copy": true, 00:11:20.811 "nvme_iov_md": false 00:11:20.811 }, 00:11:20.811 "memory_domains": [ 00:11:20.811 { 00:11:20.811 "dma_device_id": "system", 00:11:20.811 "dma_device_type": 1 00:11:20.811 }, 00:11:20.811 { 00:11:20.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:20.811 "dma_device_type": 2 00:11:20.811 } 00:11:20.811 ], 00:11:20.811 "driver_specific": {} 00:11:20.811 } 00:11:20.811 ] 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.811 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.811 "name": "Existed_Raid", 00:11:20.811 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:20.811 "strip_size_kb": 0, 00:11:20.811 "state": "online", 00:11:20.811 "raid_level": 
"raid1", 00:11:20.811 "superblock": true, 00:11:20.811 "num_base_bdevs": 4, 00:11:20.811 "num_base_bdevs_discovered": 4, 00:11:20.811 "num_base_bdevs_operational": 4, 00:11:20.811 "base_bdevs_list": [ 00:11:20.811 { 00:11:20.811 "name": "NewBaseBdev", 00:11:20.811 "uuid": "baa708b3-dacf-476b-9da8-4307a79d5812", 00:11:20.811 "is_configured": true, 00:11:20.811 "data_offset": 2048, 00:11:20.811 "data_size": 63488 00:11:20.811 }, 00:11:20.811 { 00:11:20.811 "name": "BaseBdev2", 00:11:20.811 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 00:11:20.811 "is_configured": true, 00:11:20.811 "data_offset": 2048, 00:11:20.812 "data_size": 63488 00:11:20.812 }, 00:11:20.812 { 00:11:20.812 "name": "BaseBdev3", 00:11:20.812 "uuid": "840ce98c-65c7-4b71-a525-552d08010c40", 00:11:20.812 "is_configured": true, 00:11:20.812 "data_offset": 2048, 00:11:20.812 "data_size": 63488 00:11:20.812 }, 00:11:20.812 { 00:11:20.812 "name": "BaseBdev4", 00:11:20.812 "uuid": "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:20.812 "is_configured": true, 00:11:20.812 "data_offset": 2048, 00:11:20.812 "data_size": 63488 00:11:20.812 } 00:11:20.812 ] 00:11:20.812 }' 00:11:20.812 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.812 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:21.074 [2024-11-16 18:52:04.469517] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:21.074 "name": "Existed_Raid", 00:11:21.074 "aliases": [ 00:11:21.074 "90c20532-8c4b-404a-982e-46a968ccaa0e" 00:11:21.074 ], 00:11:21.074 "product_name": "Raid Volume", 00:11:21.074 "block_size": 512, 00:11:21.074 "num_blocks": 63488, 00:11:21.074 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:21.074 "assigned_rate_limits": { 00:11:21.074 "rw_ios_per_sec": 0, 00:11:21.074 "rw_mbytes_per_sec": 0, 00:11:21.074 "r_mbytes_per_sec": 0, 00:11:21.074 "w_mbytes_per_sec": 0 00:11:21.074 }, 00:11:21.074 "claimed": false, 00:11:21.074 "zoned": false, 00:11:21.074 "supported_io_types": { 00:11:21.074 "read": true, 00:11:21.074 "write": true, 00:11:21.074 "unmap": false, 00:11:21.074 "flush": false, 00:11:21.074 "reset": true, 00:11:21.074 "nvme_admin": false, 00:11:21.074 "nvme_io": false, 00:11:21.074 "nvme_io_md": false, 00:11:21.074 "write_zeroes": true, 00:11:21.074 "zcopy": false, 00:11:21.074 "get_zone_info": false, 00:11:21.074 "zone_management": false, 00:11:21.074 "zone_append": false, 00:11:21.074 "compare": false, 00:11:21.074 "compare_and_write": false, 00:11:21.074 "abort": false, 00:11:21.074 "seek_hole": false, 
00:11:21.074 "seek_data": false, 00:11:21.074 "copy": false, 00:11:21.074 "nvme_iov_md": false 00:11:21.074 }, 00:11:21.074 "memory_domains": [ 00:11:21.074 { 00:11:21.074 "dma_device_id": "system", 00:11:21.074 "dma_device_type": 1 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.074 "dma_device_type": 2 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "dma_device_id": "system", 00:11:21.074 "dma_device_type": 1 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.074 "dma_device_type": 2 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "dma_device_id": "system", 00:11:21.074 "dma_device_type": 1 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.074 "dma_device_type": 2 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "dma_device_id": "system", 00:11:21.074 "dma_device_type": 1 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.074 "dma_device_type": 2 00:11:21.074 } 00:11:21.074 ], 00:11:21.074 "driver_specific": { 00:11:21.074 "raid": { 00:11:21.074 "uuid": "90c20532-8c4b-404a-982e-46a968ccaa0e", 00:11:21.074 "strip_size_kb": 0, 00:11:21.074 "state": "online", 00:11:21.074 "raid_level": "raid1", 00:11:21.074 "superblock": true, 00:11:21.074 "num_base_bdevs": 4, 00:11:21.074 "num_base_bdevs_discovered": 4, 00:11:21.074 "num_base_bdevs_operational": 4, 00:11:21.074 "base_bdevs_list": [ 00:11:21.074 { 00:11:21.074 "name": "NewBaseBdev", 00:11:21.074 "uuid": "baa708b3-dacf-476b-9da8-4307a79d5812", 00:11:21.074 "is_configured": true, 00:11:21.074 "data_offset": 2048, 00:11:21.074 "data_size": 63488 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "name": "BaseBdev2", 00:11:21.074 "uuid": "9f0e1ecd-2403-4966-b4ab-3f61f4c1013e", 00:11:21.074 "is_configured": true, 00:11:21.074 "data_offset": 2048, 00:11:21.074 "data_size": 63488 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "name": "BaseBdev3", 00:11:21.074 "uuid": 
"840ce98c-65c7-4b71-a525-552d08010c40", 00:11:21.074 "is_configured": true, 00:11:21.074 "data_offset": 2048, 00:11:21.074 "data_size": 63488 00:11:21.074 }, 00:11:21.074 { 00:11:21.074 "name": "BaseBdev4", 00:11:21.074 "uuid": "221a91e3-aaa3-4c3a-82c7-75ee92eb3f0c", 00:11:21.074 "is_configured": true, 00:11:21.074 "data_offset": 2048, 00:11:21.074 "data_size": 63488 00:11:21.074 } 00:11:21.074 ] 00:11:21.074 } 00:11:21.074 } 00:11:21.074 }' 00:11:21.074 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:21.334 BaseBdev2 00:11:21.334 BaseBdev3 00:11:21.334 BaseBdev4' 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.334 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.335 
18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.335 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.595 [2024-11-16 18:52:04.808587] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:21.595 [2024-11-16 18:52:04.808663] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.595 [2024-11-16 18:52:04.808739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.595 [2024-11-16 18:52:04.809041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.595 [2024-11-16 18:52:04.809054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:21.595 18:52:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73618 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73618 ']' 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73618 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73618 00:11:21.595 killing process with pid 73618 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73618' 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73618 00:11:21.595 [2024-11-16 18:52:04.842372] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:21.595 18:52:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73618 00:11:21.857 [2024-11-16 18:52:05.225891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:23.237 18:52:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:23.237 00:11:23.237 real 0m10.921s 00:11:23.237 user 0m17.294s 00:11:23.237 sys 0m1.851s 00:11:23.237 18:52:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.237 18:52:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:11:23.237 ************************************ 00:11:23.237 END TEST raid_state_function_test_sb 00:11:23.237 ************************************ 00:11:23.237 18:52:06 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:11:23.237 18:52:06 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:23.237 18:52:06 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.237 18:52:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:23.237 ************************************ 00:11:23.237 START TEST raid_superblock_test 00:11:23.237 ************************************ 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74283 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74283 00:11:23.237 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74283 ']' 00:11:23.238 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:23.238 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:23.238 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:23.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:23.238 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:23.238 18:52:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.238 [2024-11-16 18:52:06.466117] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:23.238 [2024-11-16 18:52:06.466293] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74283 ] 00:11:23.238 [2024-11-16 18:52:06.641844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.497 [2024-11-16 18:52:06.748538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.497 [2024-11-16 18:52:06.940543] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.497 [2024-11-16 18:52:06.940680] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:24.067 
18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 malloc1 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 [2024-11-16 18:52:07.352151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:24.067 [2024-11-16 18:52:07.352275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.067 [2024-11-16 18:52:07.352319] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:24.067 [2024-11-16 18:52:07.352350] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.067 [2024-11-16 18:52:07.354445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.067 [2024-11-16 18:52:07.354532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:24.067 pt1 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 malloc2 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 [2024-11-16 18:52:07.411524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:24.067 [2024-11-16 18:52:07.411582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.067 [2024-11-16 18:52:07.411602] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:24.067 [2024-11-16 18:52:07.411611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.067 [2024-11-16 18:52:07.413678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.067 [2024-11-16 18:52:07.413712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:24.067 
pt2 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 malloc3 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 [2024-11-16 18:52:07.477688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:24.067 [2024-11-16 18:52:07.477791] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.067 [2024-11-16 18:52:07.477827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:24.067 [2024-11-16 18:52:07.477854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.067 [2024-11-16 18:52:07.479924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.067 [2024-11-16 18:52:07.480007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:24.067 pt3 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 malloc4 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.067 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.067 [2024-11-16 18:52:07.537203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:24.067 [2024-11-16 18:52:07.537296] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.067 [2024-11-16 18:52:07.537353] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:24.067 [2024-11-16 18:52:07.537382] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.327 [2024-11-16 18:52:07.539395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.327 [2024-11-16 18:52:07.539463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:24.327 pt4 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.327 [2024-11-16 18:52:07.549221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:24.327 [2024-11-16 18:52:07.550998] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:24.327 [2024-11-16 18:52:07.551097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:24.327 [2024-11-16 18:52:07.551167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:24.327 [2024-11-16 18:52:07.551393] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:24.327 [2024-11-16 18:52:07.551441] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:24.327 [2024-11-16 18:52:07.551731] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.327 [2024-11-16 18:52:07.551947] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:24.327 [2024-11-16 18:52:07.551996] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:24.327 [2024-11-16 18:52:07.552178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.327 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.327 
18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.328 "name": "raid_bdev1", 00:11:24.328 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:24.328 "strip_size_kb": 0, 00:11:24.328 "state": "online", 00:11:24.328 "raid_level": "raid1", 00:11:24.328 "superblock": true, 00:11:24.328 "num_base_bdevs": 4, 00:11:24.328 "num_base_bdevs_discovered": 4, 00:11:24.328 "num_base_bdevs_operational": 4, 00:11:24.328 "base_bdevs_list": [ 00:11:24.328 { 00:11:24.328 "name": "pt1", 00:11:24.328 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.328 "is_configured": true, 00:11:24.328 "data_offset": 2048, 00:11:24.328 "data_size": 63488 00:11:24.328 }, 00:11:24.328 { 00:11:24.328 "name": "pt2", 00:11:24.328 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.328 "is_configured": true, 00:11:24.328 "data_offset": 2048, 00:11:24.328 "data_size": 63488 00:11:24.328 }, 00:11:24.328 { 00:11:24.328 "name": "pt3", 00:11:24.328 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.328 "is_configured": true, 00:11:24.328 "data_offset": 2048, 00:11:24.328 "data_size": 63488 
00:11:24.328 }, 00:11:24.328 { 00:11:24.328 "name": "pt4", 00:11:24.328 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.328 "is_configured": true, 00:11:24.328 "data_offset": 2048, 00:11:24.328 "data_size": 63488 00:11:24.328 } 00:11:24.328 ] 00:11:24.328 }' 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.328 18:52:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.587 [2024-11-16 18:52:08.032669] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.587 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.847 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.847 "name": "raid_bdev1", 00:11:24.847 "aliases": [ 00:11:24.847 "f2452dd1-1dac-4bc4-a81c-c2314883a11a" 00:11:24.847 ], 
00:11:24.847 "product_name": "Raid Volume", 00:11:24.847 "block_size": 512, 00:11:24.847 "num_blocks": 63488, 00:11:24.847 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:24.847 "assigned_rate_limits": { 00:11:24.847 "rw_ios_per_sec": 0, 00:11:24.847 "rw_mbytes_per_sec": 0, 00:11:24.847 "r_mbytes_per_sec": 0, 00:11:24.847 "w_mbytes_per_sec": 0 00:11:24.847 }, 00:11:24.847 "claimed": false, 00:11:24.847 "zoned": false, 00:11:24.847 "supported_io_types": { 00:11:24.847 "read": true, 00:11:24.847 "write": true, 00:11:24.847 "unmap": false, 00:11:24.847 "flush": false, 00:11:24.847 "reset": true, 00:11:24.847 "nvme_admin": false, 00:11:24.847 "nvme_io": false, 00:11:24.847 "nvme_io_md": false, 00:11:24.847 "write_zeroes": true, 00:11:24.847 "zcopy": false, 00:11:24.847 "get_zone_info": false, 00:11:24.847 "zone_management": false, 00:11:24.847 "zone_append": false, 00:11:24.847 "compare": false, 00:11:24.847 "compare_and_write": false, 00:11:24.847 "abort": false, 00:11:24.848 "seek_hole": false, 00:11:24.848 "seek_data": false, 00:11:24.848 "copy": false, 00:11:24.848 "nvme_iov_md": false 00:11:24.848 }, 00:11:24.848 "memory_domains": [ 00:11:24.848 { 00:11:24.848 "dma_device_id": "system", 00:11:24.848 "dma_device_type": 1 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.848 "dma_device_type": 2 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "dma_device_id": "system", 00:11:24.848 "dma_device_type": 1 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.848 "dma_device_type": 2 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "dma_device_id": "system", 00:11:24.848 "dma_device_type": 1 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.848 "dma_device_type": 2 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "dma_device_id": "system", 00:11:24.848 "dma_device_type": 1 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:24.848 "dma_device_type": 2 00:11:24.848 } 00:11:24.848 ], 00:11:24.848 "driver_specific": { 00:11:24.848 "raid": { 00:11:24.848 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:24.848 "strip_size_kb": 0, 00:11:24.848 "state": "online", 00:11:24.848 "raid_level": "raid1", 00:11:24.848 "superblock": true, 00:11:24.848 "num_base_bdevs": 4, 00:11:24.848 "num_base_bdevs_discovered": 4, 00:11:24.848 "num_base_bdevs_operational": 4, 00:11:24.848 "base_bdevs_list": [ 00:11:24.848 { 00:11:24.848 "name": "pt1", 00:11:24.848 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:24.848 "is_configured": true, 00:11:24.848 "data_offset": 2048, 00:11:24.848 "data_size": 63488 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "name": "pt2", 00:11:24.848 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:24.848 "is_configured": true, 00:11:24.848 "data_offset": 2048, 00:11:24.848 "data_size": 63488 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "name": "pt3", 00:11:24.848 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:24.848 "is_configured": true, 00:11:24.848 "data_offset": 2048, 00:11:24.848 "data_size": 63488 00:11:24.848 }, 00:11:24.848 { 00:11:24.848 "name": "pt4", 00:11:24.848 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:24.848 "is_configured": true, 00:11:24.848 "data_offset": 2048, 00:11:24.848 "data_size": 63488 00:11:24.848 } 00:11:24.848 ] 00:11:24.848 } 00:11:24.848 } 00:11:24.848 }' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:24.848 pt2 00:11:24.848 pt3 00:11:24.848 pt4' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.848 18:52:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.848 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 [2024-11-16 18:52:08.340107] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f2452dd1-1dac-4bc4-a81c-c2314883a11a 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f2452dd1-1dac-4bc4-a81c-c2314883a11a ']' 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 [2024-11-16 18:52:08.383750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.109 [2024-11-16 18:52:08.383821] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.109 [2024-11-16 18:52:08.383919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.109 [2024-11-16 18:52:08.384031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.109 [2024-11-16 18:52:08.384081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.109 18:52:08 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.109 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.109 [2024-11-16 18:52:08.551462] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:25.110 [2024-11-16 18:52:08.553244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:25.110 [2024-11-16 18:52:08.553342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:25.110 [2024-11-16 18:52:08.553380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:25.110 [2024-11-16 18:52:08.553431] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:25.110 [2024-11-16 18:52:08.553478] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:25.110 [2024-11-16 18:52:08.553496] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:25.110 [2024-11-16 18:52:08.553514] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:25.110 [2024-11-16 18:52:08.553526] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:25.110 [2024-11-16 18:52:08.553536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:11:25.110 request: 00:11:25.110 { 00:11:25.110 "name": "raid_bdev1", 00:11:25.110 "raid_level": "raid1", 00:11:25.110 "base_bdevs": [ 00:11:25.110 "malloc1", 00:11:25.110 "malloc2", 00:11:25.110 "malloc3", 00:11:25.110 "malloc4" 00:11:25.110 ], 00:11:25.110 "superblock": false, 00:11:25.110 "method": "bdev_raid_create", 00:11:25.110 "req_id": 1 00:11:25.110 } 00:11:25.110 Got JSON-RPC error response 00:11:25.110 response: 00:11:25.110 { 00:11:25.110 "code": -17, 00:11:25.110 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:25.110 } 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.110 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:25.370 
18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.370 [2024-11-16 18:52:08.619330] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:25.370 [2024-11-16 18:52:08.619425] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.370 [2024-11-16 18:52:08.619456] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:25.370 [2024-11-16 18:52:08.619510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.370 [2024-11-16 18:52:08.621553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.370 [2024-11-16 18:52:08.621633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.370 [2024-11-16 18:52:08.621750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:25.370 [2024-11-16 18:52:08.621833] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.370 pt1 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.370 18:52:08 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.370 "name": "raid_bdev1", 00:11:25.370 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:25.370 "strip_size_kb": 0, 00:11:25.370 "state": "configuring", 00:11:25.370 "raid_level": "raid1", 00:11:25.370 "superblock": true, 00:11:25.370 "num_base_bdevs": 4, 00:11:25.370 "num_base_bdevs_discovered": 1, 00:11:25.370 "num_base_bdevs_operational": 4, 00:11:25.370 "base_bdevs_list": [ 00:11:25.370 { 00:11:25.370 "name": "pt1", 00:11:25.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.370 "is_configured": true, 00:11:25.370 "data_offset": 2048, 00:11:25.370 "data_size": 63488 00:11:25.370 }, 00:11:25.370 { 00:11:25.370 "name": null, 00:11:25.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.370 "is_configured": false, 00:11:25.370 "data_offset": 2048, 00:11:25.370 "data_size": 63488 00:11:25.370 }, 00:11:25.370 { 00:11:25.370 "name": null, 00:11:25.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.370 
"is_configured": false, 00:11:25.370 "data_offset": 2048, 00:11:25.370 "data_size": 63488 00:11:25.370 }, 00:11:25.370 { 00:11:25.370 "name": null, 00:11:25.370 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.370 "is_configured": false, 00:11:25.370 "data_offset": 2048, 00:11:25.370 "data_size": 63488 00:11:25.370 } 00:11:25.370 ] 00:11:25.370 }' 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.370 18:52:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.631 [2024-11-16 18:52:09.026674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:25.631 [2024-11-16 18:52:09.026737] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.631 [2024-11-16 18:52:09.026755] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:25.631 [2024-11-16 18:52:09.026765] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.631 [2024-11-16 18:52:09.027168] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.631 [2024-11-16 18:52:09.027187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:25.631 [2024-11-16 18:52:09.027260] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:25.631 [2024-11-16 18:52:09.027288] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:11:25.631 pt2 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.631 [2024-11-16 18:52:09.038633] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.631 "name": "raid_bdev1", 00:11:25.631 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:25.631 "strip_size_kb": 0, 00:11:25.631 "state": "configuring", 00:11:25.631 "raid_level": "raid1", 00:11:25.631 "superblock": true, 00:11:25.631 "num_base_bdevs": 4, 00:11:25.631 "num_base_bdevs_discovered": 1, 00:11:25.631 "num_base_bdevs_operational": 4, 00:11:25.631 "base_bdevs_list": [ 00:11:25.631 { 00:11:25.631 "name": "pt1", 00:11:25.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.631 "is_configured": true, 00:11:25.631 "data_offset": 2048, 00:11:25.631 "data_size": 63488 00:11:25.631 }, 00:11:25.631 { 00:11:25.631 "name": null, 00:11:25.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.631 "is_configured": false, 00:11:25.631 "data_offset": 0, 00:11:25.631 "data_size": 63488 00:11:25.631 }, 00:11:25.631 { 00:11:25.631 "name": null, 00:11:25.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.631 "is_configured": false, 00:11:25.631 "data_offset": 2048, 00:11:25.631 "data_size": 63488 00:11:25.631 }, 00:11:25.631 { 00:11:25.631 "name": null, 00:11:25.631 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.631 "is_configured": false, 00:11:25.631 "data_offset": 2048, 00:11:25.631 "data_size": 63488 00:11:25.631 } 00:11:25.631 ] 00:11:25.631 }' 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.631 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.202 [2024-11-16 18:52:09.481879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:26.202 [2024-11-16 18:52:09.481989] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.202 [2024-11-16 18:52:09.482033] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:26.202 [2024-11-16 18:52:09.482064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.202 [2024-11-16 18:52:09.482508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.202 [2024-11-16 18:52:09.482564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:26.202 [2024-11-16 18:52:09.482686] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:26.202 [2024-11-16 18:52:09.482738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:26.202 pt2 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:26.202 18:52:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.202 [2024-11-16 18:52:09.493816] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:26.202 [2024-11-16 18:52:09.493896] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.202 [2024-11-16 18:52:09.493944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:26.202 [2024-11-16 18:52:09.493970] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.202 [2024-11-16 18:52:09.494322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.202 [2024-11-16 18:52:09.494373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:26.202 [2024-11-16 18:52:09.494459] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:26.202 [2024-11-16 18:52:09.494501] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:26.202 pt3 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.202 [2024-11-16 18:52:09.505777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:26.202 [2024-11-16 
18:52:09.505818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.202 [2024-11-16 18:52:09.505832] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:26.202 [2024-11-16 18:52:09.505839] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.202 [2024-11-16 18:52:09.506151] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.202 [2024-11-16 18:52:09.506165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:26.202 [2024-11-16 18:52:09.506215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:26.202 [2024-11-16 18:52:09.506230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:26.202 [2024-11-16 18:52:09.506364] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:26.202 [2024-11-16 18:52:09.506371] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:26.202 [2024-11-16 18:52:09.506585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:26.202 [2024-11-16 18:52:09.506781] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:26.202 [2024-11-16 18:52:09.506794] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:26.202 [2024-11-16 18:52:09.506935] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.202 pt4 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.202 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.202 "name": "raid_bdev1", 00:11:26.202 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:26.202 "strip_size_kb": 0, 00:11:26.202 "state": "online", 00:11:26.202 "raid_level": "raid1", 00:11:26.202 "superblock": true, 00:11:26.202 "num_base_bdevs": 4, 00:11:26.202 
"num_base_bdevs_discovered": 4, 00:11:26.202 "num_base_bdevs_operational": 4, 00:11:26.202 "base_bdevs_list": [ 00:11:26.202 { 00:11:26.202 "name": "pt1", 00:11:26.202 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.202 "is_configured": true, 00:11:26.202 "data_offset": 2048, 00:11:26.202 "data_size": 63488 00:11:26.202 }, 00:11:26.202 { 00:11:26.202 "name": "pt2", 00:11:26.202 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.203 "is_configured": true, 00:11:26.203 "data_offset": 2048, 00:11:26.203 "data_size": 63488 00:11:26.203 }, 00:11:26.203 { 00:11:26.203 "name": "pt3", 00:11:26.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.203 "is_configured": true, 00:11:26.203 "data_offset": 2048, 00:11:26.203 "data_size": 63488 00:11:26.203 }, 00:11:26.203 { 00:11:26.203 "name": "pt4", 00:11:26.203 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.203 "is_configured": true, 00:11:26.203 "data_offset": 2048, 00:11:26.203 "data_size": 63488 00:11:26.203 } 00:11:26.203 ] 00:11:26.203 }' 00:11:26.203 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.203 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.463 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:26.463 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:26.463 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.463 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.463 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.463 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.463 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.463 18:52:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:26.463 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.463 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.463 [2024-11-16 18:52:09.917422] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.723 18:52:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.723 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.723 "name": "raid_bdev1", 00:11:26.723 "aliases": [ 00:11:26.723 "f2452dd1-1dac-4bc4-a81c-c2314883a11a" 00:11:26.723 ], 00:11:26.723 "product_name": "Raid Volume", 00:11:26.723 "block_size": 512, 00:11:26.723 "num_blocks": 63488, 00:11:26.723 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:26.723 "assigned_rate_limits": { 00:11:26.723 "rw_ios_per_sec": 0, 00:11:26.723 "rw_mbytes_per_sec": 0, 00:11:26.723 "r_mbytes_per_sec": 0, 00:11:26.723 "w_mbytes_per_sec": 0 00:11:26.723 }, 00:11:26.723 "claimed": false, 00:11:26.723 "zoned": false, 00:11:26.723 "supported_io_types": { 00:11:26.723 "read": true, 00:11:26.723 "write": true, 00:11:26.723 "unmap": false, 00:11:26.723 "flush": false, 00:11:26.723 "reset": true, 00:11:26.723 "nvme_admin": false, 00:11:26.723 "nvme_io": false, 00:11:26.723 "nvme_io_md": false, 00:11:26.723 "write_zeroes": true, 00:11:26.723 "zcopy": false, 00:11:26.723 "get_zone_info": false, 00:11:26.723 "zone_management": false, 00:11:26.723 "zone_append": false, 00:11:26.723 "compare": false, 00:11:26.723 "compare_and_write": false, 00:11:26.723 "abort": false, 00:11:26.723 "seek_hole": false, 00:11:26.723 "seek_data": false, 00:11:26.723 "copy": false, 00:11:26.723 "nvme_iov_md": false 00:11:26.723 }, 00:11:26.723 "memory_domains": [ 00:11:26.723 { 00:11:26.723 "dma_device_id": "system", 00:11:26.723 
"dma_device_type": 1 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.723 "dma_device_type": 2 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "dma_device_id": "system", 00:11:26.723 "dma_device_type": 1 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.723 "dma_device_type": 2 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "dma_device_id": "system", 00:11:26.723 "dma_device_type": 1 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.723 "dma_device_type": 2 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "dma_device_id": "system", 00:11:26.723 "dma_device_type": 1 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.723 "dma_device_type": 2 00:11:26.723 } 00:11:26.723 ], 00:11:26.723 "driver_specific": { 00:11:26.723 "raid": { 00:11:26.723 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:26.723 "strip_size_kb": 0, 00:11:26.723 "state": "online", 00:11:26.723 "raid_level": "raid1", 00:11:26.723 "superblock": true, 00:11:26.723 "num_base_bdevs": 4, 00:11:26.723 "num_base_bdevs_discovered": 4, 00:11:26.723 "num_base_bdevs_operational": 4, 00:11:26.723 "base_bdevs_list": [ 00:11:26.723 { 00:11:26.723 "name": "pt1", 00:11:26.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.723 "is_configured": true, 00:11:26.723 "data_offset": 2048, 00:11:26.723 "data_size": 63488 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "name": "pt2", 00:11:26.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.723 "is_configured": true, 00:11:26.723 "data_offset": 2048, 00:11:26.723 "data_size": 63488 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "name": "pt3", 00:11:26.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.723 "is_configured": true, 00:11:26.723 "data_offset": 2048, 00:11:26.723 "data_size": 63488 00:11:26.723 }, 00:11:26.723 { 00:11:26.723 "name": "pt4", 00:11:26.723 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:11:26.723 "is_configured": true, 00:11:26.723 "data_offset": 2048, 00:11:26.723 "data_size": 63488 00:11:26.723 } 00:11:26.723 ] 00:11:26.723 } 00:11:26.723 } 00:11:26.723 }' 00:11:26.723 18:52:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:26.723 pt2 00:11:26.723 pt3 00:11:26.723 pt4' 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.723 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.983 [2024-11-16 18:52:10.264767] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f2452dd1-1dac-4bc4-a81c-c2314883a11a '!=' f2452dd1-1dac-4bc4-a81c-c2314883a11a ']' 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.983 [2024-11-16 18:52:10.300462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:26.983 18:52:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.983 "name": "raid_bdev1", 00:11:26.983 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:26.983 "strip_size_kb": 0, 00:11:26.983 "state": "online", 
00:11:26.983 "raid_level": "raid1", 00:11:26.983 "superblock": true, 00:11:26.983 "num_base_bdevs": 4, 00:11:26.983 "num_base_bdevs_discovered": 3, 00:11:26.983 "num_base_bdevs_operational": 3, 00:11:26.983 "base_bdevs_list": [ 00:11:26.983 { 00:11:26.983 "name": null, 00:11:26.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.983 "is_configured": false, 00:11:26.983 "data_offset": 0, 00:11:26.983 "data_size": 63488 00:11:26.983 }, 00:11:26.983 { 00:11:26.983 "name": "pt2", 00:11:26.983 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.983 "is_configured": true, 00:11:26.983 "data_offset": 2048, 00:11:26.983 "data_size": 63488 00:11:26.983 }, 00:11:26.983 { 00:11:26.983 "name": "pt3", 00:11:26.983 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.983 "is_configured": true, 00:11:26.983 "data_offset": 2048, 00:11:26.983 "data_size": 63488 00:11:26.983 }, 00:11:26.983 { 00:11:26.983 "name": "pt4", 00:11:26.983 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.983 "is_configured": true, 00:11:26.983 "data_offset": 2048, 00:11:26.983 "data_size": 63488 00:11:26.983 } 00:11:26.983 ] 00:11:26.983 }' 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.983 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.553 [2024-11-16 18:52:10.739767] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:27.553 [2024-11-16 18:52:10.739865] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:27.553 [2024-11-16 18:52:10.739970] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:11:27.553 [2024-11-16 18:52:10.740063] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:27.553 [2024-11-16 18:52:10.740112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:27.553 
18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.553 [2024-11-16 18:52:10.827582] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.553 [2024-11-16 18:52:10.827637] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.553 [2024-11-16 18:52:10.827669] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:27.553 [2024-11-16 18:52:10.827678] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.553 [2024-11-16 18:52:10.829868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.553 [2024-11-16 18:52:10.829903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.553 [2024-11-16 18:52:10.829981] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:27.553 [2024-11-16 18:52:10.830042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.553 pt2 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.553 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.554 "name": "raid_bdev1", 00:11:27.554 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:27.554 "strip_size_kb": 0, 00:11:27.554 "state": "configuring", 00:11:27.554 "raid_level": "raid1", 00:11:27.554 "superblock": true, 00:11:27.554 "num_base_bdevs": 4, 00:11:27.554 "num_base_bdevs_discovered": 1, 00:11:27.554 "num_base_bdevs_operational": 3, 00:11:27.554 "base_bdevs_list": [ 00:11:27.554 { 00:11:27.554 "name": null, 00:11:27.554 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.554 "is_configured": false, 00:11:27.554 "data_offset": 2048, 00:11:27.554 "data_size": 63488 00:11:27.554 }, 00:11:27.554 { 00:11:27.554 "name": "pt2", 00:11:27.554 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.554 "is_configured": true, 00:11:27.554 "data_offset": 2048, 00:11:27.554 "data_size": 63488 00:11:27.554 }, 00:11:27.554 { 00:11:27.554 "name": null, 00:11:27.554 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.554 "is_configured": false, 00:11:27.554 "data_offset": 2048, 00:11:27.554 "data_size": 63488 00:11:27.554 }, 00:11:27.554 { 00:11:27.554 "name": null, 00:11:27.554 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.554 "is_configured": false, 00:11:27.554 "data_offset": 2048, 00:11:27.554 "data_size": 63488 00:11:27.554 } 00:11:27.554 ] 00:11:27.554 }' 
00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.554 18:52:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.873 [2024-11-16 18:52:11.206948] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:27.873 [2024-11-16 18:52:11.207048] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.873 [2024-11-16 18:52:11.207093] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:11:27.873 [2024-11-16 18:52:11.207122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.873 [2024-11-16 18:52:11.207565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.873 [2024-11-16 18:52:11.207620] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:27.873 [2024-11-16 18:52:11.207734] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:27.873 [2024-11-16 18:52:11.207783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:27.873 pt3 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.873 "name": "raid_bdev1", 00:11:27.873 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:27.873 "strip_size_kb": 0, 00:11:27.873 "state": "configuring", 00:11:27.873 "raid_level": "raid1", 00:11:27.873 "superblock": true, 00:11:27.873 "num_base_bdevs": 4, 00:11:27.873 "num_base_bdevs_discovered": 2, 00:11:27.873 "num_base_bdevs_operational": 3, 00:11:27.873 
"base_bdevs_list": [ 00:11:27.873 { 00:11:27.873 "name": null, 00:11:27.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.873 "is_configured": false, 00:11:27.873 "data_offset": 2048, 00:11:27.873 "data_size": 63488 00:11:27.873 }, 00:11:27.873 { 00:11:27.873 "name": "pt2", 00:11:27.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.873 "is_configured": true, 00:11:27.873 "data_offset": 2048, 00:11:27.873 "data_size": 63488 00:11:27.873 }, 00:11:27.873 { 00:11:27.873 "name": "pt3", 00:11:27.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.873 "is_configured": true, 00:11:27.873 "data_offset": 2048, 00:11:27.873 "data_size": 63488 00:11:27.873 }, 00:11:27.873 { 00:11:27.873 "name": null, 00:11:27.873 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.873 "is_configured": false, 00:11:27.873 "data_offset": 2048, 00:11:27.873 "data_size": 63488 00:11:27.873 } 00:11:27.873 ] 00:11:27.873 }' 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.873 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.456 [2024-11-16 18:52:11.630262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:28.456 [2024-11-16 18:52:11.630333] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.456 [2024-11-16 18:52:11.630357] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:11:28.456 [2024-11-16 18:52:11.630367] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.456 [2024-11-16 18:52:11.630828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.456 [2024-11-16 18:52:11.630847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:28.456 [2024-11-16 18:52:11.630933] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:28.456 [2024-11-16 18:52:11.630963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:28.456 [2024-11-16 18:52:11.631118] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:28.456 [2024-11-16 18:52:11.631127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.456 [2024-11-16 18:52:11.631375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:28.456 [2024-11-16 18:52:11.631532] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:28.456 [2024-11-16 18:52:11.631545] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:28.456 [2024-11-16 18:52:11.631707] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.456 pt4 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.456 "name": "raid_bdev1", 00:11:28.456 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:28.456 "strip_size_kb": 0, 00:11:28.456 "state": "online", 00:11:28.456 "raid_level": "raid1", 00:11:28.456 "superblock": true, 00:11:28.456 "num_base_bdevs": 4, 00:11:28.456 "num_base_bdevs_discovered": 3, 00:11:28.456 "num_base_bdevs_operational": 3, 00:11:28.456 "base_bdevs_list": [ 00:11:28.456 { 00:11:28.456 "name": null, 00:11:28.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.456 "is_configured": false, 00:11:28.456 
"data_offset": 2048, 00:11:28.456 "data_size": 63488 00:11:28.456 }, 00:11:28.456 { 00:11:28.456 "name": "pt2", 00:11:28.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.456 "is_configured": true, 00:11:28.456 "data_offset": 2048, 00:11:28.456 "data_size": 63488 00:11:28.456 }, 00:11:28.456 { 00:11:28.456 "name": "pt3", 00:11:28.456 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.456 "is_configured": true, 00:11:28.456 "data_offset": 2048, 00:11:28.456 "data_size": 63488 00:11:28.456 }, 00:11:28.456 { 00:11:28.456 "name": "pt4", 00:11:28.456 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.456 "is_configured": true, 00:11:28.456 "data_offset": 2048, 00:11:28.456 "data_size": 63488 00:11:28.456 } 00:11:28.456 ] 00:11:28.456 }' 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.456 18:52:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.716 [2024-11-16 18:52:12.033517] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.716 [2024-11-16 18:52:12.033548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:28.716 [2024-11-16 18:52:12.033634] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.716 [2024-11-16 18:52:12.033751] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.716 [2024-11-16 18:52:12.033767] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:28.716 18:52:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.716 [2024-11-16 18:52:12.105387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:28.716 [2024-11-16 18:52:12.105505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:11:28.716 [2024-11-16 18:52:12.105529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:11:28.716 [2024-11-16 18:52:12.105540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.716 [2024-11-16 18:52:12.107800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.716 [2024-11-16 18:52:12.107849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:28.716 [2024-11-16 18:52:12.107939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:28.716 [2024-11-16 18:52:12.107989] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:28.716 [2024-11-16 18:52:12.108125] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:28.716 [2024-11-16 18:52:12.108139] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:28.716 [2024-11-16 18:52:12.108154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:11:28.716 [2024-11-16 18:52:12.108217] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:28.716 [2024-11-16 18:52:12.108329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:28.716 pt1 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.716 "name": "raid_bdev1", 00:11:28.716 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:28.716 "strip_size_kb": 0, 00:11:28.716 "state": "configuring", 00:11:28.716 "raid_level": "raid1", 00:11:28.716 "superblock": true, 00:11:28.716 "num_base_bdevs": 4, 00:11:28.716 "num_base_bdevs_discovered": 2, 00:11:28.716 "num_base_bdevs_operational": 3, 00:11:28.716 "base_bdevs_list": [ 00:11:28.716 { 00:11:28.716 "name": null, 00:11:28.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.716 "is_configured": false, 00:11:28.716 "data_offset": 2048, 00:11:28.716 
"data_size": 63488 00:11:28.716 }, 00:11:28.716 { 00:11:28.716 "name": "pt2", 00:11:28.716 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.716 "is_configured": true, 00:11:28.716 "data_offset": 2048, 00:11:28.716 "data_size": 63488 00:11:28.716 }, 00:11:28.716 { 00:11:28.716 "name": "pt3", 00:11:28.716 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.716 "is_configured": true, 00:11:28.716 "data_offset": 2048, 00:11:28.716 "data_size": 63488 00:11:28.716 }, 00:11:28.716 { 00:11:28.716 "name": null, 00:11:28.716 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.716 "is_configured": false, 00:11:28.716 "data_offset": 2048, 00:11:28.716 "data_size": 63488 00:11:28.716 } 00:11:28.716 ] 00:11:28.716 }' 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.716 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.285 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:29.285 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:29.285 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.285 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.285 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.285 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 [2024-11-16 
18:52:12.548679] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:29.286 [2024-11-16 18:52:12.548785] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.286 [2024-11-16 18:52:12.548827] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:11:29.286 [2024-11-16 18:52:12.548856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.286 [2024-11-16 18:52:12.549331] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.286 [2024-11-16 18:52:12.549390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:29.286 [2024-11-16 18:52:12.549505] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:29.286 [2024-11-16 18:52:12.549567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:29.286 [2024-11-16 18:52:12.549761] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:29.286 [2024-11-16 18:52:12.549808] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:29.286 [2024-11-16 18:52:12.550089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:11:29.286 [2024-11-16 18:52:12.550275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:29.286 [2024-11-16 18:52:12.550319] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:29.286 [2024-11-16 18:52:12.550499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.286 pt4 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:29.286 18:52:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.286 "name": "raid_bdev1", 00:11:29.286 "uuid": "f2452dd1-1dac-4bc4-a81c-c2314883a11a", 00:11:29.286 "strip_size_kb": 0, 00:11:29.286 "state": "online", 00:11:29.286 "raid_level": "raid1", 00:11:29.286 "superblock": true, 00:11:29.286 "num_base_bdevs": 4, 00:11:29.286 "num_base_bdevs_discovered": 3, 00:11:29.286 "num_base_bdevs_operational": 3, 00:11:29.286 "base_bdevs_list": [ 00:11:29.286 { 
00:11:29.286 "name": null, 00:11:29.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.286 "is_configured": false, 00:11:29.286 "data_offset": 2048, 00:11:29.286 "data_size": 63488 00:11:29.286 }, 00:11:29.286 { 00:11:29.286 "name": "pt2", 00:11:29.286 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:29.286 "is_configured": true, 00:11:29.286 "data_offset": 2048, 00:11:29.286 "data_size": 63488 00:11:29.286 }, 00:11:29.286 { 00:11:29.286 "name": "pt3", 00:11:29.286 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:29.286 "is_configured": true, 00:11:29.286 "data_offset": 2048, 00:11:29.286 "data_size": 63488 00:11:29.286 }, 00:11:29.286 { 00:11:29.286 "name": "pt4", 00:11:29.286 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:29.286 "is_configured": true, 00:11:29.286 "data_offset": 2048, 00:11:29.286 "data_size": 63488 00:11:29.286 } 00:11:29.286 ] 00:11:29.286 }' 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.286 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:29.546 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.546 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.546 18:52:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:29.546 18:52:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.546 18:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.805 
18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:29.805 [2024-11-16 18:52:13.024143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f2452dd1-1dac-4bc4-a81c-c2314883a11a '!=' f2452dd1-1dac-4bc4-a81c-c2314883a11a ']' 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74283 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74283 ']' 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74283 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74283 00:11:29.805 killing process with pid 74283 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74283' 00:11:29.805 18:52:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74283 00:11:29.805 [2024-11-16 18:52:13.100320] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:29.805 [2024-11-16 18:52:13.100420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.805 18:52:13 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74283 00:11:29.805 [2024-11-16 18:52:13.100498] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.805 [2024-11-16 18:52:13.100510] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:30.065 [2024-11-16 18:52:13.481886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:31.446 18:52:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:31.446 00:11:31.446 real 0m8.154s 00:11:31.446 user 0m12.817s 00:11:31.446 sys 0m1.429s 00:11:31.446 18:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.446 18:52:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.446 ************************************ 00:11:31.447 END TEST raid_superblock_test 00:11:31.447 ************************************ 00:11:31.447 18:52:14 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:11:31.447 18:52:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:31.447 18:52:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.447 18:52:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:31.447 ************************************ 00:11:31.447 START TEST raid_read_error_test 00:11:31.447 ************************************ 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:31.447 
18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:31.447 18:52:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QlrLKGBN7k 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74772 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74772 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74772 ']' 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.447 18:52:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.447 [2024-11-16 18:52:14.707432] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:31.447 [2024-11-16 18:52:14.707658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74772 ] 00:11:31.447 [2024-11-16 18:52:14.882523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:31.706 [2024-11-16 18:52:14.995907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.966 [2024-11-16 18:52:15.183547] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.966 [2024-11-16 18:52:15.183581] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.226 BaseBdev1_malloc 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.226 true 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.226 [2024-11-16 18:52:15.577192] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:32.226 [2024-11-16 18:52:15.577245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.226 [2024-11-16 18:52:15.577280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:32.226 [2024-11-16 18:52:15.577291] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.226 [2024-11-16 18:52:15.579323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.226 [2024-11-16 18:52:15.579436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:32.226 BaseBdev1 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.226 BaseBdev2_malloc 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.226 true 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.226 [2024-11-16 18:52:15.644427] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:32.226 [2024-11-16 18:52:15.644483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.226 [2024-11-16 18:52:15.644498] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:32.226 [2024-11-16 18:52:15.644508] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.226 [2024-11-16 18:52:15.646516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.226 [2024-11-16 18:52:15.646558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:32.226 BaseBdev2 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.226 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.487 BaseBdev3_malloc 00:11:32.487 18:52:15 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.487 true 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.487 [2024-11-16 18:52:15.725014] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:32.487 [2024-11-16 18:52:15.725079] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.487 [2024-11-16 18:52:15.725097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:32.487 [2024-11-16 18:52:15.725107] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.487 [2024-11-16 18:52:15.727206] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.487 [2024-11-16 18:52:15.727283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:32.487 BaseBdev3 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.487 BaseBdev4_malloc 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.487 true 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.487 [2024-11-16 18:52:15.790950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:32.487 [2024-11-16 18:52:15.791064] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:32.487 [2024-11-16 18:52:15.791103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:32.487 [2024-11-16 18:52:15.791113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:32.487 [2024-11-16 18:52:15.793225] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:32.487 [2024-11-16 18:52:15.793268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:32.487 BaseBdev4 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.487 [2024-11-16 18:52:15.802983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:32.487 [2024-11-16 18:52:15.804814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:32.487 [2024-11-16 18:52:15.804888] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.487 [2024-11-16 18:52:15.804953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:32.487 [2024-11-16 18:52:15.805177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:32.487 [2024-11-16 18:52:15.805190] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:32.487 [2024-11-16 18:52:15.805435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:32.487 [2024-11-16 18:52:15.805585] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:32.487 [2024-11-16 18:52:15.805593] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:32.487 [2024-11-16 18:52:15.805764] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:32.487 18:52:15 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.487 "name": "raid_bdev1", 00:11:32.487 "uuid": "37c3fd14-17c5-4eea-95d3-a0c78c29eb41", 00:11:32.487 "strip_size_kb": 0, 00:11:32.487 "state": "online", 00:11:32.487 "raid_level": "raid1", 00:11:32.487 "superblock": true, 00:11:32.487 "num_base_bdevs": 4, 00:11:32.487 "num_base_bdevs_discovered": 4, 00:11:32.487 "num_base_bdevs_operational": 4, 00:11:32.487 "base_bdevs_list": [ 00:11:32.487 { 
00:11:32.487 "name": "BaseBdev1", 00:11:32.487 "uuid": "93117916-05c9-5c90-bacb-daa3a9b76558", 00:11:32.487 "is_configured": true, 00:11:32.487 "data_offset": 2048, 00:11:32.487 "data_size": 63488 00:11:32.487 }, 00:11:32.487 { 00:11:32.487 "name": "BaseBdev2", 00:11:32.487 "uuid": "efce0685-3084-5c2d-8a8e-545a2b3fd782", 00:11:32.487 "is_configured": true, 00:11:32.487 "data_offset": 2048, 00:11:32.487 "data_size": 63488 00:11:32.487 }, 00:11:32.487 { 00:11:32.487 "name": "BaseBdev3", 00:11:32.487 "uuid": "8b33c2cd-898c-56c3-9e6e-caa441fb6b5a", 00:11:32.487 "is_configured": true, 00:11:32.487 "data_offset": 2048, 00:11:32.487 "data_size": 63488 00:11:32.487 }, 00:11:32.487 { 00:11:32.487 "name": "BaseBdev4", 00:11:32.487 "uuid": "a5c72dcc-adfb-54f9-81d7-7dfd7afa9798", 00:11:32.487 "is_configured": true, 00:11:32.487 "data_offset": 2048, 00:11:32.487 "data_size": 63488 00:11:32.487 } 00:11:32.487 ] 00:11:32.487 }' 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.487 18:52:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.747 18:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:32.747 18:52:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:33.007 [2024-11-16 18:52:16.311427] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.946 18:52:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.946 18:52:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.946 "name": "raid_bdev1", 00:11:33.946 "uuid": "37c3fd14-17c5-4eea-95d3-a0c78c29eb41", 00:11:33.946 "strip_size_kb": 0, 00:11:33.946 "state": "online", 00:11:33.946 "raid_level": "raid1", 00:11:33.946 "superblock": true, 00:11:33.946 "num_base_bdevs": 4, 00:11:33.946 "num_base_bdevs_discovered": 4, 00:11:33.946 "num_base_bdevs_operational": 4, 00:11:33.946 "base_bdevs_list": [ 00:11:33.946 { 00:11:33.946 "name": "BaseBdev1", 00:11:33.946 "uuid": "93117916-05c9-5c90-bacb-daa3a9b76558", 00:11:33.946 "is_configured": true, 00:11:33.946 "data_offset": 2048, 00:11:33.946 "data_size": 63488 00:11:33.946 }, 00:11:33.946 { 00:11:33.946 "name": "BaseBdev2", 00:11:33.946 "uuid": "efce0685-3084-5c2d-8a8e-545a2b3fd782", 00:11:33.946 "is_configured": true, 00:11:33.946 "data_offset": 2048, 00:11:33.946 "data_size": 63488 00:11:33.946 }, 00:11:33.946 { 00:11:33.946 "name": "BaseBdev3", 00:11:33.946 "uuid": "8b33c2cd-898c-56c3-9e6e-caa441fb6b5a", 00:11:33.946 "is_configured": true, 00:11:33.946 "data_offset": 2048, 00:11:33.946 "data_size": 63488 00:11:33.946 }, 00:11:33.946 { 00:11:33.946 "name": "BaseBdev4", 00:11:33.946 "uuid": "a5c72dcc-adfb-54f9-81d7-7dfd7afa9798", 00:11:33.946 "is_configured": true, 00:11:33.946 "data_offset": 2048, 00:11:33.946 "data_size": 63488 00:11:33.946 } 00:11:33.946 ] 00:11:33.946 }' 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.946 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.205 [2024-11-16 18:52:17.634483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.205 [2024-11-16 18:52:17.634576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.205 [2024-11-16 18:52:17.637135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.205 [2024-11-16 18:52:17.637252] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.205 [2024-11-16 18:52:17.637389] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.205 [2024-11-16 18:52:17.637436] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74772 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74772 ']' 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74772 00:11:34.205 { 00:11:34.205 "results": [ 00:11:34.205 { 00:11:34.205 "job": "raid_bdev1", 00:11:34.205 "core_mask": "0x1", 00:11:34.205 "workload": "randrw", 00:11:34.205 "percentage": 50, 00:11:34.205 "status": "finished", 00:11:34.205 "queue_depth": 1, 00:11:34.205 "io_size": 131072, 00:11:34.205 "runtime": 1.323886, 00:11:34.205 "iops": 11086.30199276977, 00:11:34.205 "mibps": 1385.7877490962212, 00:11:34.205 "io_failed": 0, 00:11:34.205 "io_timeout": 0, 00:11:34.205 "avg_latency_us": 87.70041638984205, 00:11:34.205 "min_latency_us": 22.134497816593885, 00:11:34.205 "max_latency_us": 1566.8541484716156 00:11:34.205 } 00:11:34.205 ], 00:11:34.205 "core_count": 1 00:11:34.205 } 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:34.205 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74772 00:11:34.472 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:34.472 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:34.472 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74772' 00:11:34.472 killing process with pid 74772 00:11:34.472 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74772 00:11:34.472 [2024-11-16 18:52:17.679374] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.472 18:52:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74772 00:11:34.740 [2024-11-16 18:52:17.991302] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QlrLKGBN7k 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:35.680 ************************************ 00:11:35.680 END TEST raid_read_error_test 00:11:35.680 ************************************ 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:35.680 00:11:35.680 real 0m4.529s 00:11:35.680 user 0m5.332s 00:11:35.680 sys 0m0.571s 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:35.680 18:52:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.939 18:52:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:11:35.939 18:52:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:35.939 18:52:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:35.939 18:52:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.939 ************************************ 00:11:35.939 START TEST raid_write_error_test 00:11:35.939 ************************************ 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:35.939 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.qj6aqyNZfe 00:11:35.940 18:52:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74912 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74912 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74912 ']' 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:35.940 18:52:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.940 [2024-11-16 18:52:19.300134] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:35.940 [2024-11-16 18:52:19.300354] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74912 ] 00:11:36.199 [2024-11-16 18:52:19.475276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.199 [2024-11-16 18:52:19.594450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:36.473 [2024-11-16 18:52:19.785514] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.473 [2024-11-16 18:52:19.785572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.732 BaseBdev1_malloc 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.732 true 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.732 [2024-11-16 18:52:20.174510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.732 [2024-11-16 18:52:20.174565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.732 [2024-11-16 18:52:20.174600] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.732 [2024-11-16 18:52:20.174610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.732 [2024-11-16 18:52:20.176621] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.732 [2024-11-16 18:52:20.176754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.732 BaseBdev1 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.732 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.991 BaseBdev2_malloc 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.991 18:52:20 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.991 true 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.991 [2024-11-16 18:52:20.238249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.991 [2024-11-16 18:52:20.238303] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.991 [2024-11-16 18:52:20.238335] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.991 [2024-11-16 18:52:20.238345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.991 [2024-11-16 18:52:20.240395] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.991 [2024-11-16 18:52:20.240439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.991 BaseBdev2 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:36.991 BaseBdev3_malloc 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.991 true 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.991 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.991 [2024-11-16 18:52:20.316530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:36.991 [2024-11-16 18:52:20.316595] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.991 [2024-11-16 18:52:20.316611] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:36.991 [2024-11-16 18:52:20.316621] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.991 [2024-11-16 18:52:20.318607] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.992 [2024-11-16 18:52:20.318641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:36.992 BaseBdev3 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.992 BaseBdev4_malloc 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.992 true 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.992 [2024-11-16 18:52:20.381944] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:36.992 [2024-11-16 18:52:20.381997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.992 [2024-11-16 18:52:20.382014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:36.992 [2024-11-16 18:52:20.382024] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.992 [2024-11-16 18:52:20.384119] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.992 [2024-11-16 18:52:20.384159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:36.992 BaseBdev4 
00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.992 [2024-11-16 18:52:20.393977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.992 [2024-11-16 18:52:20.395728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.992 [2024-11-16 18:52:20.395803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.992 [2024-11-16 18:52:20.395872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.992 [2024-11-16 18:52:20.396083] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:36.992 [2024-11-16 18:52:20.396105] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:36.992 [2024-11-16 18:52:20.396328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:36.992 [2024-11-16 18:52:20.396497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:36.992 [2024-11-16 18:52:20.396513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:36.992 [2024-11-16 18:52:20.396664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.992 "name": "raid_bdev1", 00:11:36.992 "uuid": "65d5262f-5500-4974-96fb-ec95dd24f4aa", 00:11:36.992 "strip_size_kb": 0, 00:11:36.992 "state": "online", 00:11:36.992 "raid_level": "raid1", 00:11:36.992 "superblock": true, 00:11:36.992 "num_base_bdevs": 4, 00:11:36.992 "num_base_bdevs_discovered": 4, 00:11:36.992 
"num_base_bdevs_operational": 4, 00:11:36.992 "base_bdevs_list": [ 00:11:36.992 { 00:11:36.992 "name": "BaseBdev1", 00:11:36.992 "uuid": "628a7875-fee6-54ec-b2f1-13f36b2dd3cb", 00:11:36.992 "is_configured": true, 00:11:36.992 "data_offset": 2048, 00:11:36.992 "data_size": 63488 00:11:36.992 }, 00:11:36.992 { 00:11:36.992 "name": "BaseBdev2", 00:11:36.992 "uuid": "305e22bd-0b4b-566c-b38d-9722ccae4b12", 00:11:36.992 "is_configured": true, 00:11:36.992 "data_offset": 2048, 00:11:36.992 "data_size": 63488 00:11:36.992 }, 00:11:36.992 { 00:11:36.992 "name": "BaseBdev3", 00:11:36.992 "uuid": "f65eaffd-57e9-5547-8b5c-742047ac96c9", 00:11:36.992 "is_configured": true, 00:11:36.992 "data_offset": 2048, 00:11:36.992 "data_size": 63488 00:11:36.992 }, 00:11:36.992 { 00:11:36.992 "name": "BaseBdev4", 00:11:36.992 "uuid": "df82edf6-2627-5c7f-a76f-37bf58a5b9ce", 00:11:36.992 "is_configured": true, 00:11:36.992 "data_offset": 2048, 00:11:36.992 "data_size": 63488 00:11:36.992 } 00:11:36.992 ] 00:11:36.992 }' 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.992 18:52:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.559 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:37.559 18:52:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:37.559 [2024-11-16 18:52:20.934460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.496 [2024-11-16 18:52:21.849089] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:38.496 [2024-11-16 18:52:21.849149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:38.496 [2024-11-16 18:52:21.849380] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.496 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.496 "name": "raid_bdev1", 00:11:38.496 "uuid": "65d5262f-5500-4974-96fb-ec95dd24f4aa", 00:11:38.496 "strip_size_kb": 0, 00:11:38.496 "state": "online", 00:11:38.496 "raid_level": "raid1", 00:11:38.496 "superblock": true, 00:11:38.496 "num_base_bdevs": 4, 00:11:38.496 "num_base_bdevs_discovered": 3, 00:11:38.496 "num_base_bdevs_operational": 3, 00:11:38.496 "base_bdevs_list": [ 00:11:38.496 { 00:11:38.496 "name": null, 00:11:38.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.496 "is_configured": false, 00:11:38.496 "data_offset": 0, 00:11:38.496 "data_size": 63488 00:11:38.496 }, 00:11:38.496 { 00:11:38.496 "name": "BaseBdev2", 00:11:38.496 "uuid": "305e22bd-0b4b-566c-b38d-9722ccae4b12", 00:11:38.496 "is_configured": true, 00:11:38.496 "data_offset": 2048, 00:11:38.497 "data_size": 63488 00:11:38.497 }, 00:11:38.497 { 00:11:38.497 "name": "BaseBdev3", 00:11:38.497 "uuid": "f65eaffd-57e9-5547-8b5c-742047ac96c9", 00:11:38.497 "is_configured": true, 00:11:38.497 "data_offset": 2048, 00:11:38.497 "data_size": 63488 00:11:38.497 }, 00:11:38.497 { 00:11:38.497 "name": "BaseBdev4", 00:11:38.497 "uuid": "df82edf6-2627-5c7f-a76f-37bf58a5b9ce", 00:11:38.497 "is_configured": true, 00:11:38.497 "data_offset": 2048, 00:11:38.497 "data_size": 63488 00:11:38.497 } 00:11:38.497 ] 
00:11:38.497 }' 00:11:38.497 18:52:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.497 18:52:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.066 [2024-11-16 18:52:22.264083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:39.066 [2024-11-16 18:52:22.264121] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:39.066 [2024-11-16 18:52:22.266626] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:39.066 [2024-11-16 18:52:22.266685] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.066 [2024-11-16 18:52:22.266786] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:39.066 [2024-11-16 18:52:22.266804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:39.066 { 00:11:39.066 "results": [ 00:11:39.066 { 00:11:39.066 "job": "raid_bdev1", 00:11:39.066 "core_mask": "0x1", 00:11:39.066 "workload": "randrw", 00:11:39.066 "percentage": 50, 00:11:39.066 "status": "finished", 00:11:39.066 "queue_depth": 1, 00:11:39.066 "io_size": 131072, 00:11:39.066 "runtime": 1.330276, 00:11:39.066 "iops": 11902.793104588822, 00:11:39.066 "mibps": 1487.8491380736027, 00:11:39.066 "io_failed": 0, 00:11:39.066 "io_timeout": 0, 00:11:39.066 "avg_latency_us": 81.4973886826921, 00:11:39.066 "min_latency_us": 23.02882096069869, 00:11:39.066 "max_latency_us": 1438.071615720524 00:11:39.066 } 00:11:39.066 ], 00:11:39.066 "core_count": 1 
00:11:39.066 } 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74912 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74912 ']' 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74912 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:39.066 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74912 00:11:39.066 killing process with pid 74912 00:11:39.067 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:39.067 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:39.067 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74912' 00:11:39.067 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74912 00:11:39.067 [2024-11-16 18:52:22.310446] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:39.067 18:52:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74912 00:11:39.326 [2024-11-16 18:52:22.627096] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.qj6aqyNZfe 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:40.706 00:11:40.706 real 0m4.575s 00:11:40.706 user 0m5.354s 00:11:40.706 sys 0m0.558s 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:40.706 ************************************ 00:11:40.706 END TEST raid_write_error_test 00:11:40.706 ************************************ 00:11:40.706 18:52:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.707 18:52:23 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:11:40.707 18:52:23 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:11:40.707 18:52:23 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:11:40.707 18:52:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:40.707 18:52:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:40.707 18:52:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.707 ************************************ 00:11:40.707 START TEST raid_rebuild_test 00:11:40.707 ************************************ 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:11:40.707 
18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75057 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75057 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75057 ']' 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.707 18:52:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.707 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:40.707 Zero copy mechanism will not be used. 00:11:40.707 [2024-11-16 18:52:23.925752] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:40.707 [2024-11-16 18:52:23.925873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75057 ] 00:11:40.707 [2024-11-16 18:52:24.099637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:40.967 [2024-11-16 18:52:24.209675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.967 [2024-11-16 18:52:24.398391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:40.967 [2024-11-16 18:52:24.398433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 BaseBdev1_malloc 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 [2024-11-16 18:52:24.799148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:41.536 
[2024-11-16 18:52:24.799215] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.536 [2024-11-16 18:52:24.799238] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:41.536 [2024-11-16 18:52:24.799249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.536 [2024-11-16 18:52:24.801343] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.536 [2024-11-16 18:52:24.801384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:41.536 BaseBdev1 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 BaseBdev2_malloc 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 [2024-11-16 18:52:24.850854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:41.536 [2024-11-16 18:52:24.850920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.536 [2024-11-16 18:52:24.850940] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:11:41.536 [2024-11-16 18:52:24.850951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.536 [2024-11-16 18:52:24.853116] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.536 [2024-11-16 18:52:24.853157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:41.536 BaseBdev2 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 spare_malloc 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 spare_delay 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 [2024-11-16 18:52:24.929482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:41.536 [2024-11-16 18:52:24.929558] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:11:41.536 [2024-11-16 18:52:24.929577] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:41.536 [2024-11-16 18:52:24.929588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.536 [2024-11-16 18:52:24.931614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.536 [2024-11-16 18:52:24.931663] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:41.536 spare 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 [2024-11-16 18:52:24.941507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:41.536 [2024-11-16 18:52:24.943299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:41.536 [2024-11-16 18:52:24.943385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:41.536 [2024-11-16 18:52:24.943398] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:41.536 [2024-11-16 18:52:24.943635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:41.536 [2024-11-16 18:52:24.943844] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:41.536 [2024-11-16 18:52:24.943859] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:41.536 [2024-11-16 18:52:24.944002] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.536 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.536 "name": "raid_bdev1", 00:11:41.536 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:41.537 "strip_size_kb": 0, 00:11:41.537 "state": "online", 00:11:41.537 
"raid_level": "raid1", 00:11:41.537 "superblock": false, 00:11:41.537 "num_base_bdevs": 2, 00:11:41.537 "num_base_bdevs_discovered": 2, 00:11:41.537 "num_base_bdevs_operational": 2, 00:11:41.537 "base_bdevs_list": [ 00:11:41.537 { 00:11:41.537 "name": "BaseBdev1", 00:11:41.537 "uuid": "8ba4067f-0ef6-5bed-a125-2f84e17d4c2c", 00:11:41.537 "is_configured": true, 00:11:41.537 "data_offset": 0, 00:11:41.537 "data_size": 65536 00:11:41.537 }, 00:11:41.537 { 00:11:41.537 "name": "BaseBdev2", 00:11:41.537 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:41.537 "is_configured": true, 00:11:41.537 "data_offset": 0, 00:11:41.537 "data_size": 65536 00:11:41.537 } 00:11:41.537 ] 00:11:41.537 }' 00:11:41.537 18:52:24 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.537 18:52:24 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.125 [2024-11-16 18:52:25.337110] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.125 18:52:25 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.125 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:42.125 [2024-11-16 18:52:25.584447] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:42.385 /dev/nbd0 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:42.385 1+0 records in 00:11:42.385 1+0 records out 00:11:42.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390379 s, 10.5 MB/s 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:42.385 18:52:25 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:11:46.584 65536+0 records in 00:11:46.584 65536+0 records out 00:11:46.584 33554432 bytes (34 MB, 32 MiB) copied, 4.28714 s, 7.8 MB/s 00:11:46.584 18:52:29 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:11:46.584 18:52:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:46.584 18:52:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:11:46.584 18:52:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:46.584 18:52:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:46.584 18:52:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:46.584 18:52:29 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:46.844 [2024-11-16 18:52:30.142382] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.844 [2024-11-16 18:52:30.178405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.844 18:52:30 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.844 "name": "raid_bdev1", 00:11:46.844 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:46.844 "strip_size_kb": 0, 00:11:46.844 "state": "online", 00:11:46.844 "raid_level": "raid1", 00:11:46.844 "superblock": false, 00:11:46.844 "num_base_bdevs": 2, 00:11:46.844 "num_base_bdevs_discovered": 1, 00:11:46.844 "num_base_bdevs_operational": 1, 00:11:46.844 "base_bdevs_list": [ 00:11:46.844 { 00:11:46.844 "name": null, 00:11:46.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.844 "is_configured": false, 00:11:46.844 "data_offset": 0, 00:11:46.844 "data_size": 65536 00:11:46.844 }, 00:11:46.844 { 00:11:46.844 "name": "BaseBdev2", 00:11:46.844 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:46.844 "is_configured": true, 00:11:46.844 "data_offset": 0, 00:11:46.844 "data_size": 65536 00:11:46.844 } 00:11:46.844 ] 00:11:46.844 }' 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.844 18:52:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:47.414 18:52:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.414 18:52:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.414 [2024-11-16 18:52:30.593748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:47.414 [2024-11-16 18:52:30.609956] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:11:47.414 18:52:30 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.414 18:52:30 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:11:47.414 [2024-11-16 18:52:30.611803] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:48.354 "name": "raid_bdev1", 00:11:48.354 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:48.354 "strip_size_kb": 0, 00:11:48.354 "state": "online", 00:11:48.354 "raid_level": "raid1", 00:11:48.354 "superblock": false, 00:11:48.354 "num_base_bdevs": 2, 00:11:48.354 "num_base_bdevs_discovered": 2, 00:11:48.354 "num_base_bdevs_operational": 2, 00:11:48.354 "process": { 00:11:48.354 "type": "rebuild", 00:11:48.354 "target": "spare", 00:11:48.354 "progress": { 00:11:48.354 
"blocks": 20480, 00:11:48.354 "percent": 31 00:11:48.354 } 00:11:48.354 }, 00:11:48.354 "base_bdevs_list": [ 00:11:48.354 { 00:11:48.354 "name": "spare", 00:11:48.354 "uuid": "3fe9c309-bb02-582d-abd1-824048d96991", 00:11:48.354 "is_configured": true, 00:11:48.354 "data_offset": 0, 00:11:48.354 "data_size": 65536 00:11:48.354 }, 00:11:48.354 { 00:11:48.354 "name": "BaseBdev2", 00:11:48.354 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:48.354 "is_configured": true, 00:11:48.354 "data_offset": 0, 00:11:48.354 "data_size": 65536 00:11:48.354 } 00:11:48.354 ] 00:11:48.354 }' 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.354 18:52:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.354 [2024-11-16 18:52:31.735207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:48.354 [2024-11-16 18:52:31.816756] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:11:48.354 [2024-11-16 18:52:31.816838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.354 [2024-11-16 18:52:31.816852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:11:48.354 [2024-11-16 18:52:31.816863] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:11:48.614 18:52:31 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.614 "name": "raid_bdev1", 00:11:48.614 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:48.614 "strip_size_kb": 0, 00:11:48.614 "state": "online", 00:11:48.614 "raid_level": "raid1", 00:11:48.614 
"superblock": false, 00:11:48.614 "num_base_bdevs": 2, 00:11:48.614 "num_base_bdevs_discovered": 1, 00:11:48.614 "num_base_bdevs_operational": 1, 00:11:48.614 "base_bdevs_list": [ 00:11:48.614 { 00:11:48.614 "name": null, 00:11:48.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.614 "is_configured": false, 00:11:48.614 "data_offset": 0, 00:11:48.614 "data_size": 65536 00:11:48.614 }, 00:11:48.614 { 00:11:48.614 "name": "BaseBdev2", 00:11:48.614 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:48.614 "is_configured": true, 00:11:48.614 "data_offset": 0, 00:11:48.614 "data_size": 65536 00:11:48.614 } 00:11:48.614 ] 00:11:48.614 }' 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.614 18:52:31 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.874 18:52:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.134 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:11:49.134 "name": "raid_bdev1", 00:11:49.134 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:49.134 "strip_size_kb": 0, 00:11:49.134 "state": "online", 00:11:49.134 "raid_level": "raid1", 00:11:49.134 "superblock": false, 00:11:49.134 "num_base_bdevs": 2, 00:11:49.134 "num_base_bdevs_discovered": 1, 00:11:49.134 "num_base_bdevs_operational": 1, 00:11:49.134 "base_bdevs_list": [ 00:11:49.134 { 00:11:49.134 "name": null, 00:11:49.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.134 "is_configured": false, 00:11:49.134 "data_offset": 0, 00:11:49.134 "data_size": 65536 00:11:49.134 }, 00:11:49.134 { 00:11:49.134 "name": "BaseBdev2", 00:11:49.134 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:49.134 "is_configured": true, 00:11:49.134 "data_offset": 0, 00:11:49.134 "data_size": 65536 00:11:49.134 } 00:11:49.134 ] 00:11:49.134 }' 00:11:49.134 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:49.134 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:49.134 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:49.134 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:49.134 18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:11:49.134 18:52:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.134 18:52:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.134 [2024-11-16 18:52:32.426136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:11:49.134 [2024-11-16 18:52:32.441824] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:11:49.134 18:52:32 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.134 
18:52:32 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:11:49.134 [2024-11-16 18:52:32.443607] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:11:50.073 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.073 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.074 "name": "raid_bdev1", 00:11:50.074 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:50.074 "strip_size_kb": 0, 00:11:50.074 "state": "online", 00:11:50.074 "raid_level": "raid1", 00:11:50.074 "superblock": false, 00:11:50.074 "num_base_bdevs": 2, 00:11:50.074 "num_base_bdevs_discovered": 2, 00:11:50.074 "num_base_bdevs_operational": 2, 00:11:50.074 "process": { 00:11:50.074 "type": "rebuild", 00:11:50.074 "target": "spare", 00:11:50.074 "progress": { 00:11:50.074 "blocks": 20480, 00:11:50.074 "percent": 31 00:11:50.074 } 00:11:50.074 }, 00:11:50.074 "base_bdevs_list": [ 
00:11:50.074 { 00:11:50.074 "name": "spare", 00:11:50.074 "uuid": "3fe9c309-bb02-582d-abd1-824048d96991", 00:11:50.074 "is_configured": true, 00:11:50.074 "data_offset": 0, 00:11:50.074 "data_size": 65536 00:11:50.074 }, 00:11:50.074 { 00:11:50.074 "name": "BaseBdev2", 00:11:50.074 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:50.074 "is_configured": true, 00:11:50.074 "data_offset": 0, 00:11:50.074 "data_size": 65536 00:11:50.074 } 00:11:50.074 ] 00:11:50.074 }' 00:11:50.074 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=355 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:50.334 
18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:50.334 "name": "raid_bdev1", 00:11:50.334 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:50.334 "strip_size_kb": 0, 00:11:50.334 "state": "online", 00:11:50.334 "raid_level": "raid1", 00:11:50.334 "superblock": false, 00:11:50.334 "num_base_bdevs": 2, 00:11:50.334 "num_base_bdevs_discovered": 2, 00:11:50.334 "num_base_bdevs_operational": 2, 00:11:50.334 "process": { 00:11:50.334 "type": "rebuild", 00:11:50.334 "target": "spare", 00:11:50.334 "progress": { 00:11:50.334 "blocks": 22528, 00:11:50.334 "percent": 34 00:11:50.334 } 00:11:50.334 }, 00:11:50.334 "base_bdevs_list": [ 00:11:50.334 { 00:11:50.334 "name": "spare", 00:11:50.334 "uuid": "3fe9c309-bb02-582d-abd1-824048d96991", 00:11:50.334 "is_configured": true, 00:11:50.334 "data_offset": 0, 00:11:50.334 "data_size": 65536 00:11:50.334 }, 00:11:50.334 { 00:11:50.334 "name": "BaseBdev2", 00:11:50.334 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:50.334 "is_configured": true, 00:11:50.334 "data_offset": 0, 00:11:50.334 "data_size": 65536 00:11:50.334 } 00:11:50.334 ] 00:11:50.334 }' 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:50.334 18:52:33 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.274 18:52:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.533 18:52:34 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.533 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:51.533 "name": "raid_bdev1", 00:11:51.533 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:51.533 "strip_size_kb": 0, 00:11:51.533 "state": "online", 00:11:51.533 "raid_level": "raid1", 00:11:51.533 "superblock": false, 00:11:51.533 "num_base_bdevs": 2, 00:11:51.534 "num_base_bdevs_discovered": 2, 00:11:51.534 "num_base_bdevs_operational": 2, 00:11:51.534 "process": { 
00:11:51.534 "type": "rebuild", 00:11:51.534 "target": "spare", 00:11:51.534 "progress": { 00:11:51.534 "blocks": 45056, 00:11:51.534 "percent": 68 00:11:51.534 } 00:11:51.534 }, 00:11:51.534 "base_bdevs_list": [ 00:11:51.534 { 00:11:51.534 "name": "spare", 00:11:51.534 "uuid": "3fe9c309-bb02-582d-abd1-824048d96991", 00:11:51.534 "is_configured": true, 00:11:51.534 "data_offset": 0, 00:11:51.534 "data_size": 65536 00:11:51.534 }, 00:11:51.534 { 00:11:51.534 "name": "BaseBdev2", 00:11:51.534 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:51.534 "is_configured": true, 00:11:51.534 "data_offset": 0, 00:11:51.534 "data_size": 65536 00:11:51.534 } 00:11:51.534 ] 00:11:51.534 }' 00:11:51.534 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:51.534 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:11:51.534 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:51.534 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:11:51.534 18:52:34 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:11:52.472 [2024-11-16 18:52:35.656519] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:11:52.472 [2024-11-16 18:52:35.656684] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:11:52.472 [2024-11-16 18:52:35.656756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.472 "name": "raid_bdev1", 00:11:52.472 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:52.472 "strip_size_kb": 0, 00:11:52.472 "state": "online", 00:11:52.472 "raid_level": "raid1", 00:11:52.472 "superblock": false, 00:11:52.472 "num_base_bdevs": 2, 00:11:52.472 "num_base_bdevs_discovered": 2, 00:11:52.472 "num_base_bdevs_operational": 2, 00:11:52.472 "base_bdevs_list": [ 00:11:52.472 { 00:11:52.472 "name": "spare", 00:11:52.472 "uuid": "3fe9c309-bb02-582d-abd1-824048d96991", 00:11:52.472 "is_configured": true, 00:11:52.472 "data_offset": 0, 00:11:52.472 "data_size": 65536 00:11:52.472 }, 00:11:52.472 { 00:11:52.472 "name": "BaseBdev2", 00:11:52.472 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:52.472 "is_configured": true, 00:11:52.472 "data_offset": 0, 00:11:52.472 "data_size": 65536 00:11:52.472 } 00:11:52.472 ] 00:11:52.472 }' 00:11:52.472 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.732 18:52:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:11:52.732 18:52:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:11:52.732 "name": "raid_bdev1", 00:11:52.732 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:52.732 "strip_size_kb": 0, 00:11:52.732 "state": "online", 00:11:52.732 "raid_level": "raid1", 00:11:52.732 "superblock": false, 00:11:52.732 "num_base_bdevs": 2, 00:11:52.732 "num_base_bdevs_discovered": 2, 00:11:52.732 "num_base_bdevs_operational": 2, 00:11:52.732 "base_bdevs_list": [ 00:11:52.732 { 00:11:52.732 "name": "spare", 00:11:52.732 "uuid": "3fe9c309-bb02-582d-abd1-824048d96991", 00:11:52.732 "is_configured": true, 
00:11:52.732 "data_offset": 0, 00:11:52.732 "data_size": 65536 00:11:52.732 }, 00:11:52.732 { 00:11:52.732 "name": "BaseBdev2", 00:11:52.732 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:52.732 "is_configured": true, 00:11:52.732 "data_offset": 0, 00:11:52.732 "data_size": 65536 00:11:52.732 } 00:11:52.732 ] 00:11:52.732 }' 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:52.732 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.733 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.992 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.992 "name": "raid_bdev1", 00:11:52.992 "uuid": "ffe55446-20eb-4d7b-bce0-b94d8c0b1e5c", 00:11:52.992 "strip_size_kb": 0, 00:11:52.992 "state": "online", 00:11:52.992 "raid_level": "raid1", 00:11:52.992 "superblock": false, 00:11:52.992 "num_base_bdevs": 2, 00:11:52.992 "num_base_bdevs_discovered": 2, 00:11:52.992 "num_base_bdevs_operational": 2, 00:11:52.992 "base_bdevs_list": [ 00:11:52.992 { 00:11:52.992 "name": "spare", 00:11:52.992 "uuid": "3fe9c309-bb02-582d-abd1-824048d96991", 00:11:52.992 "is_configured": true, 00:11:52.992 "data_offset": 0, 00:11:52.992 "data_size": 65536 00:11:52.992 }, 00:11:52.992 { 00:11:52.992 "name": "BaseBdev2", 00:11:52.992 "uuid": "eb3978ea-d8ef-53e1-8bf7-29619ac8784f", 00:11:52.992 "is_configured": true, 00:11:52.992 "data_offset": 0, 00:11:52.992 "data_size": 65536 00:11:52.992 } 00:11:52.992 ] 00:11:52.992 }' 00:11:52.992 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.992 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.252 [2024-11-16 18:52:36.585299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.252 [2024-11-16 18:52:36.585374] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.252 [2024-11-16 18:52:36.585478] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.252 [2024-11-16 18:52:36.585577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.252 [2024-11-16 18:52:36.585655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:53.252 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:11:53.512 /dev/nbd0 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.512 1+0 records in 00:11:53.512 1+0 records out 00:11:53.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225697 s, 18.1 MB/s 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:53.512 18:52:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:11:53.772 /dev/nbd1 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:53.772 1+0 records in 00:11:53.772 1+0 records out 00:11:53.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422932 s, 9.7 MB/s 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:11:53.772 18:52:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.031 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75057 00:11:54.292 18:52:37 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75057 ']' 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75057 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:54.292 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75057 00:11:54.551 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:54.551 killing process with pid 75057 00:11:54.551 Received shutdown signal, test time was about 60.000000 seconds 00:11:54.551 00:11:54.551 Latency(us) 00:11:54.551 [2024-11-16T18:52:38.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:54.551 [2024-11-16T18:52:38.023Z] =================================================================================================================== 00:11:54.551 [2024-11-16T18:52:38.023Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:54.551 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:54.551 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75057' 00:11:54.551 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75057 00:11:54.551 [2024-11-16 18:52:37.775294] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:54.551 18:52:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75057 00:11:54.810 [2024-11-16 18:52:38.064869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:11:55.748 00:11:55.748 real 0m15.305s 00:11:55.748 user 0m16.757s 00:11:55.748 sys 0m2.933s 00:11:55.748 
************************************ 00:11:55.748 END TEST raid_rebuild_test 00:11:55.748 ************************************ 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.748 18:52:39 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:11:55.748 18:52:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:11:55.748 18:52:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.748 18:52:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.748 ************************************ 00:11:55.748 START TEST raid_rebuild_test_sb 00:11:55.748 ************************************ 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:11:55.748 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75476 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75476 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75476 ']' 00:11:56.008 18:52:39 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.008 18:52:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.008 [2024-11-16 18:52:39.308362] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:56.008 [2024-11-16 18:52:39.308583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75476 ] 00:11:56.008 I/O size of 3145728 is greater than zero copy threshold (65536). 00:11:56.008 Zero copy mechanism will not be used. 
00:11:56.267 [2024-11-16 18:52:39.484043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.267 [2024-11-16 18:52:39.594140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.526 [2024-11-16 18:52:39.786028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.526 [2024-11-16 18:52:39.786165] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.785 BaseBdev1_malloc 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.785 [2024-11-16 18:52:40.177077] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:11:56.785 [2024-11-16 18:52:40.177151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.785 [2024-11-16 18:52:40.177175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:56.785 [2024-11-16 
18:52:40.177187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.785 [2024-11-16 18:52:40.179238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.785 [2024-11-16 18:52:40.179352] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:56.785 BaseBdev1 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.785 BaseBdev2_malloc 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.785 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.785 [2024-11-16 18:52:40.231402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:11:56.785 [2024-11-16 18:52:40.231465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.786 [2024-11-16 18:52:40.231485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:56.786 [2024-11-16 18:52:40.231497] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.786 [2024-11-16 18:52:40.233521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:11:56.786 [2024-11-16 18:52:40.233574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:56.786 BaseBdev2 00:11:56.786 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.786 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:11:56.786 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.786 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.046 spare_malloc 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.046 spare_delay 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.046 [2024-11-16 18:52:40.311084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:11:57.046 [2024-11-16 18:52:40.311149] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:57.046 [2024-11-16 18:52:40.311169] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:11:57.046 [2024-11-16 18:52:40.311180] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:57.046 [2024-11-16 18:52:40.313300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:57.046 [2024-11-16 18:52:40.313428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:11:57.046 spare 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.046 [2024-11-16 18:52:40.323138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.046 [2024-11-16 18:52:40.325169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.046 [2024-11-16 18:52:40.325347] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:57.046 [2024-11-16 18:52:40.325364] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:57.046 [2024-11-16 18:52:40.325593] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:57.046 [2024-11-16 18:52:40.325785] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:57.046 [2024-11-16 18:52:40.325796] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:57.046 [2024-11-16 18:52:40.325945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.046 "name": "raid_bdev1", 00:11:57.046 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:11:57.046 "strip_size_kb": 0, 00:11:57.046 "state": "online", 00:11:57.046 "raid_level": "raid1", 00:11:57.046 "superblock": true, 00:11:57.046 "num_base_bdevs": 2, 00:11:57.046 
"num_base_bdevs_discovered": 2, 00:11:57.046 "num_base_bdevs_operational": 2, 00:11:57.046 "base_bdevs_list": [ 00:11:57.046 { 00:11:57.046 "name": "BaseBdev1", 00:11:57.046 "uuid": "09865aa3-0679-5dfd-8431-8ee74fd484ec", 00:11:57.046 "is_configured": true, 00:11:57.046 "data_offset": 2048, 00:11:57.046 "data_size": 63488 00:11:57.046 }, 00:11:57.046 { 00:11:57.046 "name": "BaseBdev2", 00:11:57.046 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:11:57.046 "is_configured": true, 00:11:57.046 "data_offset": 2048, 00:11:57.046 "data_size": 63488 00:11:57.046 } 00:11:57.046 ] 00:11:57.046 }' 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.046 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.305 [2024-11-16 18:52:40.722711] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.305 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:57.565 18:52:40 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:11:57.565 [2024-11-16 18:52:40.990020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:57.565 /dev/nbd0 00:11:57.565 18:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:11:57.565 18:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:11:57.565 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:11:57.565 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:11:57.565 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:11:57.565 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:11:57.824 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:11:57.824 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:11:57.824 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:11:57.824 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:11:57.824 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:11:57.824 1+0 records in 00:11:57.824 1+0 records out 00:11:57.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000319973 s, 12.8 MB/s 00:11:57.824 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.824 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:11:57.825 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:11:57.825 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:11:57.825 18:52:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:11:57.825 18:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:11:57.825 18:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:11:57.825 18:52:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:11:57.825 18:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:11:57.825 18:52:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:02.039 63488+0 records in 00:12:02.039 63488+0 records out 00:12:02.039 32505856 bytes (33 MB, 31 MiB) copied, 3.59457 s, 9.0 MB/s 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:02.039 [2024-11-16 18:52:44.874920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.039 [2024-11-16 18:52:44.886993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.039 18:52:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.039 "name": "raid_bdev1", 00:12:02.039 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:02.039 "strip_size_kb": 0, 00:12:02.039 "state": "online", 00:12:02.039 "raid_level": "raid1", 00:12:02.039 "superblock": true, 00:12:02.039 "num_base_bdevs": 2, 00:12:02.039 "num_base_bdevs_discovered": 1, 00:12:02.039 "num_base_bdevs_operational": 1, 00:12:02.039 "base_bdevs_list": [ 00:12:02.039 { 00:12:02.039 "name": null, 00:12:02.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.039 "is_configured": false, 00:12:02.039 "data_offset": 0, 00:12:02.039 "data_size": 63488 00:12:02.039 }, 00:12:02.039 { 00:12:02.039 "name": "BaseBdev2", 00:12:02.039 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:02.039 "is_configured": true, 00:12:02.039 "data_offset": 2048, 00:12:02.039 "data_size": 63488 00:12:02.039 } 00:12:02.039 ] 00:12:02.039 }' 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.039 18:52:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.039 18:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:02.039 18:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.039 18:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.039 [2024-11-16 18:52:45.326268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:12:02.039 [2024-11-16 18:52:45.343302] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:02.039 18:52:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.039 18:52:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:02.039 [2024-11-16 18:52:45.345297] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:02.978 "name": "raid_bdev1", 00:12:02.978 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:02.978 "strip_size_kb": 0, 00:12:02.978 "state": "online", 00:12:02.978 "raid_level": "raid1", 00:12:02.978 "superblock": true, 00:12:02.978 "num_base_bdevs": 2, 00:12:02.978 
"num_base_bdevs_discovered": 2, 00:12:02.978 "num_base_bdevs_operational": 2, 00:12:02.978 "process": { 00:12:02.978 "type": "rebuild", 00:12:02.978 "target": "spare", 00:12:02.978 "progress": { 00:12:02.978 "blocks": 20480, 00:12:02.978 "percent": 32 00:12:02.978 } 00:12:02.978 }, 00:12:02.978 "base_bdevs_list": [ 00:12:02.978 { 00:12:02.978 "name": "spare", 00:12:02.978 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:02.978 "is_configured": true, 00:12:02.978 "data_offset": 2048, 00:12:02.978 "data_size": 63488 00:12:02.978 }, 00:12:02.978 { 00:12:02.978 "name": "BaseBdev2", 00:12:02.978 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:02.978 "is_configured": true, 00:12:02.978 "data_offset": 2048, 00:12:02.978 "data_size": 63488 00:12:02.978 } 00:12:02.978 ] 00:12:02.978 }' 00:12:02.978 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.237 [2024-11-16 18:52:46.505303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.237 [2024-11-16 18:52:46.550230] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:03.237 [2024-11-16 18:52:46.550308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.237 [2024-11-16 18:52:46.550322] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:03.237 [2024-11-16 18:52:46.550331] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.237 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.238 18:52:46 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.238 "name": "raid_bdev1", 00:12:03.238 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:03.238 "strip_size_kb": 0, 00:12:03.238 "state": "online", 00:12:03.238 "raid_level": "raid1", 00:12:03.238 "superblock": true, 00:12:03.238 "num_base_bdevs": 2, 00:12:03.238 "num_base_bdevs_discovered": 1, 00:12:03.238 "num_base_bdevs_operational": 1, 00:12:03.238 "base_bdevs_list": [ 00:12:03.238 { 00:12:03.238 "name": null, 00:12:03.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.238 "is_configured": false, 00:12:03.238 "data_offset": 0, 00:12:03.238 "data_size": 63488 00:12:03.238 }, 00:12:03.238 { 00:12:03.238 "name": "BaseBdev2", 00:12:03.238 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:03.238 "is_configured": true, 00:12:03.238 "data_offset": 2048, 00:12:03.238 "data_size": 63488 00:12:03.238 } 00:12:03.238 ] 00:12:03.238 }' 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.238 18:52:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:03.809 "name": "raid_bdev1", 00:12:03.809 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:03.809 "strip_size_kb": 0, 00:12:03.809 "state": "online", 00:12:03.809 "raid_level": "raid1", 00:12:03.809 "superblock": true, 00:12:03.809 "num_base_bdevs": 2, 00:12:03.809 "num_base_bdevs_discovered": 1, 00:12:03.809 "num_base_bdevs_operational": 1, 00:12:03.809 "base_bdevs_list": [ 00:12:03.809 { 00:12:03.809 "name": null, 00:12:03.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.809 "is_configured": false, 00:12:03.809 "data_offset": 0, 00:12:03.809 "data_size": 63488 00:12:03.809 }, 00:12:03.809 { 00:12:03.809 "name": "BaseBdev2", 00:12:03.809 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:03.809 "is_configured": true, 00:12:03.809 "data_offset": 2048, 00:12:03.809 "data_size": 63488 00:12:03.809 } 00:12:03.809 ] 00:12:03.809 }' 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:12:03.809 [2024-11-16 18:52:47.172113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:03.809 [2024-11-16 18:52:47.187680] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.809 18:52:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:03.809 [2024-11-16 18:52:47.189484] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:04.749 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:04.749 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:04.749 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:04.749 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:04.749 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:04.749 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.749 18:52:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.749 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:04.749 18:52:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.009 "name": "raid_bdev1", 00:12:05.009 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:05.009 "strip_size_kb": 0, 00:12:05.009 "state": "online", 00:12:05.009 "raid_level": "raid1", 
00:12:05.009 "superblock": true, 00:12:05.009 "num_base_bdevs": 2, 00:12:05.009 "num_base_bdevs_discovered": 2, 00:12:05.009 "num_base_bdevs_operational": 2, 00:12:05.009 "process": { 00:12:05.009 "type": "rebuild", 00:12:05.009 "target": "spare", 00:12:05.009 "progress": { 00:12:05.009 "blocks": 20480, 00:12:05.009 "percent": 32 00:12:05.009 } 00:12:05.009 }, 00:12:05.009 "base_bdevs_list": [ 00:12:05.009 { 00:12:05.009 "name": "spare", 00:12:05.009 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:05.009 "is_configured": true, 00:12:05.009 "data_offset": 2048, 00:12:05.009 "data_size": 63488 00:12:05.009 }, 00:12:05.009 { 00:12:05.009 "name": "BaseBdev2", 00:12:05.009 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:05.009 "is_configured": true, 00:12:05.009 "data_offset": 2048, 00:12:05.009 "data_size": 63488 00:12:05.009 } 00:12:05.009 ] 00:12:05.009 }' 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:05.009 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:05.009 18:52:48 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=370 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.009 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:05.009 "name": "raid_bdev1", 00:12:05.009 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:05.009 "strip_size_kb": 0, 00:12:05.009 "state": "online", 00:12:05.009 "raid_level": "raid1", 00:12:05.009 "superblock": true, 00:12:05.009 "num_base_bdevs": 2, 00:12:05.009 "num_base_bdevs_discovered": 2, 00:12:05.009 "num_base_bdevs_operational": 2, 00:12:05.009 "process": { 00:12:05.009 "type": "rebuild", 00:12:05.009 "target": "spare", 00:12:05.009 "progress": { 00:12:05.009 "blocks": 22528, 00:12:05.009 "percent": 35 00:12:05.009 } 00:12:05.009 }, 00:12:05.010 "base_bdevs_list": [ 
00:12:05.010 { 00:12:05.010 "name": "spare", 00:12:05.010 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:05.010 "is_configured": true, 00:12:05.010 "data_offset": 2048, 00:12:05.010 "data_size": 63488 00:12:05.010 }, 00:12:05.010 { 00:12:05.010 "name": "BaseBdev2", 00:12:05.010 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:05.010 "is_configured": true, 00:12:05.010 "data_offset": 2048, 00:12:05.010 "data_size": 63488 00:12:05.010 } 00:12:05.010 ] 00:12:05.010 }' 00:12:05.010 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:05.010 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:05.010 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:05.010 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:05.010 18:52:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:06.408 "name": "raid_bdev1", 00:12:06.408 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:06.408 "strip_size_kb": 0, 00:12:06.408 "state": "online", 00:12:06.408 "raid_level": "raid1", 00:12:06.408 "superblock": true, 00:12:06.408 "num_base_bdevs": 2, 00:12:06.408 "num_base_bdevs_discovered": 2, 00:12:06.408 "num_base_bdevs_operational": 2, 00:12:06.408 "process": { 00:12:06.408 "type": "rebuild", 00:12:06.408 "target": "spare", 00:12:06.408 "progress": { 00:12:06.408 "blocks": 45056, 00:12:06.408 "percent": 70 00:12:06.408 } 00:12:06.408 }, 00:12:06.408 "base_bdevs_list": [ 00:12:06.408 { 00:12:06.408 "name": "spare", 00:12:06.408 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:06.408 "is_configured": true, 00:12:06.408 "data_offset": 2048, 00:12:06.408 "data_size": 63488 00:12:06.408 }, 00:12:06.408 { 00:12:06.408 "name": "BaseBdev2", 00:12:06.408 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:06.408 "is_configured": true, 00:12:06.408 "data_offset": 2048, 00:12:06.408 "data_size": 63488 00:12:06.408 } 00:12:06.408 ] 00:12:06.408 }' 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:06.408 18:52:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:06.994 [2024-11-16 
18:52:50.301893] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:06.994 [2024-11-16 18:52:50.301973] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:06.994 [2024-11-16 18:52:50.302075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.254 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.254 "name": "raid_bdev1", 00:12:07.254 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:07.254 "strip_size_kb": 0, 00:12:07.254 "state": "online", 00:12:07.254 "raid_level": "raid1", 00:12:07.255 "superblock": true, 00:12:07.255 "num_base_bdevs": 2, 00:12:07.255 "num_base_bdevs_discovered": 2, 00:12:07.255 
"num_base_bdevs_operational": 2, 00:12:07.255 "base_bdevs_list": [ 00:12:07.255 { 00:12:07.255 "name": "spare", 00:12:07.255 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:07.255 "is_configured": true, 00:12:07.255 "data_offset": 2048, 00:12:07.255 "data_size": 63488 00:12:07.255 }, 00:12:07.255 { 00:12:07.255 "name": "BaseBdev2", 00:12:07.255 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:07.255 "is_configured": true, 00:12:07.255 "data_offset": 2048, 00:12:07.255 "data_size": 63488 00:12:07.255 } 00:12:07.255 ] 00:12:07.255 }' 00:12:07.255 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.255 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:07.255 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.515 18:52:50 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:07.515 "name": "raid_bdev1", 00:12:07.515 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:07.515 "strip_size_kb": 0, 00:12:07.515 "state": "online", 00:12:07.515 "raid_level": "raid1", 00:12:07.515 "superblock": true, 00:12:07.515 "num_base_bdevs": 2, 00:12:07.515 "num_base_bdevs_discovered": 2, 00:12:07.515 "num_base_bdevs_operational": 2, 00:12:07.515 "base_bdevs_list": [ 00:12:07.515 { 00:12:07.515 "name": "spare", 00:12:07.515 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:07.515 "is_configured": true, 00:12:07.515 "data_offset": 2048, 00:12:07.515 "data_size": 63488 00:12:07.515 }, 00:12:07.515 { 00:12:07.515 "name": "BaseBdev2", 00:12:07.515 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:07.515 "is_configured": true, 00:12:07.515 "data_offset": 2048, 00:12:07.515 "data_size": 63488 00:12:07.515 } 00:12:07.515 ] 00:12:07.515 }' 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:07.515 
18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.515 "name": "raid_bdev1", 00:12:07.515 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:07.515 "strip_size_kb": 0, 00:12:07.515 "state": "online", 00:12:07.515 "raid_level": "raid1", 00:12:07.515 "superblock": true, 00:12:07.515 "num_base_bdevs": 2, 00:12:07.515 "num_base_bdevs_discovered": 2, 00:12:07.515 "num_base_bdevs_operational": 2, 00:12:07.515 "base_bdevs_list": [ 00:12:07.515 { 00:12:07.515 "name": "spare", 00:12:07.515 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:07.515 "is_configured": true, 00:12:07.515 "data_offset": 2048, 00:12:07.515 "data_size": 63488 00:12:07.515 }, 
00:12:07.515 { 00:12:07.515 "name": "BaseBdev2", 00:12:07.515 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:07.515 "is_configured": true, 00:12:07.515 "data_offset": 2048, 00:12:07.515 "data_size": 63488 00:12:07.515 } 00:12:07.515 ] 00:12:07.515 }' 00:12:07.515 18:52:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.516 18:52:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.084 [2024-11-16 18:52:51.319865] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:08.084 [2024-11-16 18:52:51.319898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:08.084 [2024-11-16 18:52:51.319979] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:08.084 [2024-11-16 18:52:51.320042] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:08.084 [2024-11-16 18:52:51.320054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.084 18:52:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:08.084 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:08.344 /dev/nbd0 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 
00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.344 1+0 records in 00:12:08.344 1+0 records out 00:12:08.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478025 s, 8.6 MB/s 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:08.344 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:08.344 /dev/nbd1 00:12:08.604 18:52:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:08.604 1+0 records in 00:12:08.604 1+0 records out 00:12:08.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530232 s, 7.7 MB/s 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:08.604 18:52:51 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:08.604 18:52:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:08.604 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:08.604 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:08.604 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:08.604 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:08.604 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:08.604 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.604 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:08.864 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.124 [2024-11-16 18:52:52.482813] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:12:09.124 [2024-11-16 18:52:52.482873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:09.124 [2024-11-16 18:52:52.482897] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:09.124 [2024-11-16 18:52:52.482907] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:09.124 [2024-11-16 18:52:52.485179] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:09.124 [2024-11-16 18:52:52.485272] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:09.124 [2024-11-16 18:52:52.485375] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:09.124 [2024-11-16 18:52:52.485438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:09.124 [2024-11-16 18:52:52.485624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:09.124 spare 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.124 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.124 [2024-11-16 18:52:52.585555] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:09.124 [2024-11-16 18:52:52.585593] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:09.124 [2024-11-16 18:52:52.585950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:09.124 [2024-11-16 18:52:52.586144] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:09.125 [2024-11-16 18:52:52.586161] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:09.125 [2024-11-16 18:52:52.586352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.125 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.384 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.384 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.384 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.384 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.384 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.384 
18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.384 "name": "raid_bdev1", 00:12:09.384 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:09.384 "strip_size_kb": 0, 00:12:09.384 "state": "online", 00:12:09.384 "raid_level": "raid1", 00:12:09.384 "superblock": true, 00:12:09.384 "num_base_bdevs": 2, 00:12:09.385 "num_base_bdevs_discovered": 2, 00:12:09.385 "num_base_bdevs_operational": 2, 00:12:09.385 "base_bdevs_list": [ 00:12:09.385 { 00:12:09.385 "name": "spare", 00:12:09.385 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:09.385 "is_configured": true, 00:12:09.385 "data_offset": 2048, 00:12:09.385 "data_size": 63488 00:12:09.385 }, 00:12:09.385 { 00:12:09.385 "name": "BaseBdev2", 00:12:09.385 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:09.385 "is_configured": true, 00:12:09.385 "data_offset": 2048, 00:12:09.385 "data_size": 63488 00:12:09.385 } 00:12:09.385 ] 00:12:09.385 }' 00:12:09.385 18:52:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.385 18:52:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.645 18:52:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:09.645 "name": "raid_bdev1", 00:12:09.645 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:09.645 "strip_size_kb": 0, 00:12:09.645 "state": "online", 00:12:09.645 "raid_level": "raid1", 00:12:09.645 "superblock": true, 00:12:09.645 "num_base_bdevs": 2, 00:12:09.645 "num_base_bdevs_discovered": 2, 00:12:09.645 "num_base_bdevs_operational": 2, 00:12:09.645 "base_bdevs_list": [ 00:12:09.645 { 00:12:09.645 "name": "spare", 00:12:09.645 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:09.645 "is_configured": true, 00:12:09.645 "data_offset": 2048, 00:12:09.645 "data_size": 63488 00:12:09.645 }, 00:12:09.645 { 00:12:09.645 "name": "BaseBdev2", 00:12:09.645 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:09.645 "is_configured": true, 00:12:09.645 "data_offset": 2048, 00:12:09.645 "data_size": 63488 00:12:09.645 } 00:12:09.645 ] 00:12:09.645 }' 00:12:09.645 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.905 [2024-11-16 18:52:53.213624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.905 "name": "raid_bdev1", 00:12:09.905 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:09.905 "strip_size_kb": 0, 00:12:09.905 "state": "online", 00:12:09.905 "raid_level": "raid1", 00:12:09.905 "superblock": true, 00:12:09.905 "num_base_bdevs": 2, 00:12:09.905 "num_base_bdevs_discovered": 1, 00:12:09.905 "num_base_bdevs_operational": 1, 00:12:09.905 "base_bdevs_list": [ 00:12:09.905 { 00:12:09.905 "name": null, 00:12:09.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.905 "is_configured": false, 00:12:09.905 "data_offset": 0, 00:12:09.905 "data_size": 63488 00:12:09.905 }, 00:12:09.905 { 00:12:09.905 "name": "BaseBdev2", 00:12:09.905 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:09.905 "is_configured": true, 00:12:09.905 "data_offset": 2048, 00:12:09.905 "data_size": 63488 00:12:09.905 } 00:12:09.905 ] 00:12:09.905 }' 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.905 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.476 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:10.476 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.476 18:52:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.476 [2024-11-16 18:52:53.676892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.476 [2024-11-16 18:52:53.677188] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:10.476 [2024-11-16 18:52:53.677262] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:10.476 [2024-11-16 18:52:53.677340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:10.476 [2024-11-16 18:52:53.694038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:10.476 18:52:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.476 [2024-11-16 18:52:53.696103] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:10.476 18:52:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:11.415 "name": "raid_bdev1", 00:12:11.415 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:11.415 "strip_size_kb": 0, 00:12:11.415 "state": "online", 00:12:11.415 "raid_level": "raid1", 00:12:11.415 "superblock": true, 00:12:11.415 "num_base_bdevs": 2, 00:12:11.415 "num_base_bdevs_discovered": 2, 00:12:11.415 "num_base_bdevs_operational": 2, 00:12:11.415 "process": { 00:12:11.415 "type": "rebuild", 00:12:11.415 "target": "spare", 00:12:11.415 "progress": { 00:12:11.415 "blocks": 20480, 00:12:11.415 "percent": 32 00:12:11.415 } 00:12:11.415 }, 00:12:11.415 "base_bdevs_list": [ 00:12:11.415 { 00:12:11.415 "name": "spare", 00:12:11.415 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:11.415 "is_configured": true, 00:12:11.415 "data_offset": 2048, 00:12:11.415 "data_size": 63488 00:12:11.415 }, 00:12:11.415 { 00:12:11.415 "name": "BaseBdev2", 00:12:11.415 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:11.415 "is_configured": true, 00:12:11.415 "data_offset": 2048, 00:12:11.415 "data_size": 63488 00:12:11.415 } 00:12:11.415 ] 00:12:11.415 }' 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:11.415 18:52:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.415 18:52:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.415 [2024-11-16 18:52:54.863800] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.675 [2024-11-16 18:52:54.901151] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:11.676 [2024-11-16 18:52:54.901215] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.676 [2024-11-16 18:52:54.901230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:11.676 [2024-11-16 18:52:54.901239] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.676 "name": "raid_bdev1", 00:12:11.676 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:11.676 "strip_size_kb": 0, 00:12:11.676 "state": "online", 00:12:11.676 "raid_level": "raid1", 00:12:11.676 "superblock": true, 00:12:11.676 "num_base_bdevs": 2, 00:12:11.676 "num_base_bdevs_discovered": 1, 00:12:11.676 "num_base_bdevs_operational": 1, 00:12:11.676 "base_bdevs_list": [ 00:12:11.676 { 00:12:11.676 "name": null, 00:12:11.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.676 "is_configured": false, 00:12:11.676 "data_offset": 0, 00:12:11.676 "data_size": 63488 00:12:11.676 }, 00:12:11.676 { 00:12:11.676 "name": "BaseBdev2", 00:12:11.676 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:11.676 "is_configured": true, 00:12:11.676 "data_offset": 2048, 00:12:11.676 "data_size": 63488 00:12:11.676 } 00:12:11.676 ] 00:12:11.676 }' 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.676 18:52:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.935 18:52:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:11.935 18:52:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:11.936 18:52:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.196 [2024-11-16 18:52:55.408067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:12.196 [2024-11-16 18:52:55.408233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.196 [2024-11-16 18:52:55.408274] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:12.196 [2024-11-16 18:52:55.408307] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.196 [2024-11-16 18:52:55.408812] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.196 [2024-11-16 18:52:55.408877] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:12.196 [2024-11-16 18:52:55.409007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:12.196 [2024-11-16 18:52:55.409052] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:12.196 [2024-11-16 18:52:55.409095] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:12.196 [2024-11-16 18:52:55.409153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:12.196 [2024-11-16 18:52:55.425149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:12:12.196 spare 00:12:12.196 18:52:55 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.196 [2024-11-16 18:52:55.427068] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:12.196 18:52:55 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.136 "name": "raid_bdev1", 00:12:13.136 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:13.136 "strip_size_kb": 0, 00:12:13.136 "state": "online", 00:12:13.136 
"raid_level": "raid1", 00:12:13.136 "superblock": true, 00:12:13.136 "num_base_bdevs": 2, 00:12:13.136 "num_base_bdevs_discovered": 2, 00:12:13.136 "num_base_bdevs_operational": 2, 00:12:13.136 "process": { 00:12:13.136 "type": "rebuild", 00:12:13.136 "target": "spare", 00:12:13.136 "progress": { 00:12:13.136 "blocks": 20480, 00:12:13.136 "percent": 32 00:12:13.136 } 00:12:13.136 }, 00:12:13.136 "base_bdevs_list": [ 00:12:13.136 { 00:12:13.136 "name": "spare", 00:12:13.136 "uuid": "63c6c65d-5a84-5de4-8f77-aa672321da12", 00:12:13.136 "is_configured": true, 00:12:13.136 "data_offset": 2048, 00:12:13.136 "data_size": 63488 00:12:13.136 }, 00:12:13.136 { 00:12:13.136 "name": "BaseBdev2", 00:12:13.136 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:13.136 "is_configured": true, 00:12:13.136 "data_offset": 2048, 00:12:13.136 "data_size": 63488 00:12:13.136 } 00:12:13.136 ] 00:12:13.136 }' 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.136 18:52:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.136 [2024-11-16 18:52:56.590611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:13.397 [2024-11-16 18:52:56.632158] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:13.397 [2024-11-16 18:52:56.632315] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:13.397 [2024-11-16 18:52:56.632366] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:13.397 [2024-11-16 18:52:56.632388] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.397 18:52:56 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.397 "name": "raid_bdev1", 00:12:13.397 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:13.397 "strip_size_kb": 0, 00:12:13.397 "state": "online", 00:12:13.397 "raid_level": "raid1", 00:12:13.397 "superblock": true, 00:12:13.397 "num_base_bdevs": 2, 00:12:13.397 "num_base_bdevs_discovered": 1, 00:12:13.397 "num_base_bdevs_operational": 1, 00:12:13.397 "base_bdevs_list": [ 00:12:13.397 { 00:12:13.397 "name": null, 00:12:13.397 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.397 "is_configured": false, 00:12:13.397 "data_offset": 0, 00:12:13.397 "data_size": 63488 00:12:13.397 }, 00:12:13.397 { 00:12:13.397 "name": "BaseBdev2", 00:12:13.397 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:13.397 "is_configured": true, 00:12:13.397 "data_offset": 2048, 00:12:13.397 "data_size": 63488 00:12:13.397 } 00:12:13.397 ] 00:12:13.397 }' 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.397 18:52:56 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:13.973 "name": "raid_bdev1", 00:12:13.973 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:13.973 "strip_size_kb": 0, 00:12:13.973 "state": "online", 00:12:13.973 "raid_level": "raid1", 00:12:13.973 "superblock": true, 00:12:13.973 "num_base_bdevs": 2, 00:12:13.973 "num_base_bdevs_discovered": 1, 00:12:13.973 "num_base_bdevs_operational": 1, 00:12:13.973 "base_bdevs_list": [ 00:12:13.973 { 00:12:13.973 "name": null, 00:12:13.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.973 "is_configured": false, 00:12:13.973 "data_offset": 0, 00:12:13.973 "data_size": 63488 00:12:13.973 }, 00:12:13.973 { 00:12:13.973 "name": "BaseBdev2", 00:12:13.973 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:13.973 "is_configured": true, 00:12:13.973 "data_offset": 2048, 00:12:13.973 "data_size": 63488 00:12:13.973 } 00:12:13.973 ] 00:12:13.973 }' 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.973 [2024-11-16 18:52:57.318709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:13.973 [2024-11-16 18:52:57.318772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.973 [2024-11-16 18:52:57.318796] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:13.973 [2024-11-16 18:52:57.318817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.973 [2024-11-16 18:52:57.319316] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.973 [2024-11-16 18:52:57.319338] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:13.973 [2024-11-16 18:52:57.319423] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:13.973 [2024-11-16 18:52:57.319437] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:13.973 [2024-11-16 18:52:57.319446] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:13.973 [2024-11-16 18:52:57.319457] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:13.973 BaseBdev1 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.973 18:52:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.920 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.920 "name": "raid_bdev1", 00:12:14.920 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:14.920 
"strip_size_kb": 0, 00:12:14.920 "state": "online", 00:12:14.920 "raid_level": "raid1", 00:12:14.920 "superblock": true, 00:12:14.920 "num_base_bdevs": 2, 00:12:14.920 "num_base_bdevs_discovered": 1, 00:12:14.920 "num_base_bdevs_operational": 1, 00:12:14.920 "base_bdevs_list": [ 00:12:14.920 { 00:12:14.920 "name": null, 00:12:14.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.920 "is_configured": false, 00:12:14.920 "data_offset": 0, 00:12:14.920 "data_size": 63488 00:12:14.920 }, 00:12:14.920 { 00:12:14.920 "name": "BaseBdev2", 00:12:14.920 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:14.920 "is_configured": true, 00:12:14.920 "data_offset": 2048, 00:12:14.920 "data_size": 63488 00:12:14.920 } 00:12:14.920 ] 00:12:14.921 }' 00:12:14.921 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.921 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.491 18:52:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:15.491 "name": "raid_bdev1", 00:12:15.491 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:15.491 "strip_size_kb": 0, 00:12:15.491 "state": "online", 00:12:15.491 "raid_level": "raid1", 00:12:15.491 "superblock": true, 00:12:15.491 "num_base_bdevs": 2, 00:12:15.491 "num_base_bdevs_discovered": 1, 00:12:15.491 "num_base_bdevs_operational": 1, 00:12:15.491 "base_bdevs_list": [ 00:12:15.491 { 00:12:15.491 "name": null, 00:12:15.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.491 "is_configured": false, 00:12:15.491 "data_offset": 0, 00:12:15.491 "data_size": 63488 00:12:15.491 }, 00:12:15.491 { 00:12:15.491 "name": "BaseBdev2", 00:12:15.491 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:15.491 "is_configured": true, 00:12:15.491 "data_offset": 2048, 00:12:15.491 "data_size": 63488 00:12:15.491 } 00:12:15.491 ] 00:12:15.491 }' 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:15.491 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.492 [2024-11-16 18:52:58.928029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.492 [2024-11-16 18:52:58.928250] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:15.492 [2024-11-16 18:52:58.928270] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:15.492 request: 00:12:15.492 { 00:12:15.492 "base_bdev": "BaseBdev1", 00:12:15.492 "raid_bdev": "raid_bdev1", 00:12:15.492 "method": "bdev_raid_add_base_bdev", 00:12:15.492 "req_id": 1 00:12:15.492 } 00:12:15.492 Got JSON-RPC error response 00:12:15.492 response: 00:12:15.492 { 00:12:15.492 "code": -22, 00:12:15.492 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:15.492 } 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:15.492 18:52:58 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:15.492 18:52:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.873 "name": "raid_bdev1", 00:12:16.873 "uuid": 
"edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:16.873 "strip_size_kb": 0, 00:12:16.873 "state": "online", 00:12:16.873 "raid_level": "raid1", 00:12:16.873 "superblock": true, 00:12:16.873 "num_base_bdevs": 2, 00:12:16.873 "num_base_bdevs_discovered": 1, 00:12:16.873 "num_base_bdevs_operational": 1, 00:12:16.873 "base_bdevs_list": [ 00:12:16.873 { 00:12:16.873 "name": null, 00:12:16.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.873 "is_configured": false, 00:12:16.873 "data_offset": 0, 00:12:16.873 "data_size": 63488 00:12:16.873 }, 00:12:16.873 { 00:12:16.873 "name": "BaseBdev2", 00:12:16.873 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:16.873 "is_configured": true, 00:12:16.873 "data_offset": 2048, 00:12:16.873 "data_size": 63488 00:12:16.873 } 00:12:16.873 ] 00:12:16.873 }' 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.873 18:52:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:17.132 "name": "raid_bdev1", 00:12:17.132 "uuid": "edab98d7-ceba-45f7-8c04-690222b1f735", 00:12:17.132 "strip_size_kb": 0, 00:12:17.132 "state": "online", 00:12:17.132 "raid_level": "raid1", 00:12:17.132 "superblock": true, 00:12:17.132 "num_base_bdevs": 2, 00:12:17.132 "num_base_bdevs_discovered": 1, 00:12:17.132 "num_base_bdevs_operational": 1, 00:12:17.132 "base_bdevs_list": [ 00:12:17.132 { 00:12:17.132 "name": null, 00:12:17.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.132 "is_configured": false, 00:12:17.132 "data_offset": 0, 00:12:17.132 "data_size": 63488 00:12:17.132 }, 00:12:17.132 { 00:12:17.132 "name": "BaseBdev2", 00:12:17.132 "uuid": "4d6b4dc0-8996-578e-bd92-0853aa224b2c", 00:12:17.132 "is_configured": true, 00:12:17.132 "data_offset": 2048, 00:12:17.132 "data_size": 63488 00:12:17.132 } 00:12:17.132 ] 00:12:17.132 }' 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75476 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75476 ']' 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75476 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75476 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.132 killing process with pid 75476 00:12:17.132 Received shutdown signal, test time was about 60.000000 seconds 00:12:17.132 00:12:17.132 Latency(us) 00:12:17.132 [2024-11-16T18:53:00.604Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:17.132 [2024-11-16T18:53:00.604Z] =================================================================================================================== 00:12:17.132 [2024-11-16T18:53:00.604Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75476' 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75476 00:12:17.132 [2024-11-16 18:53:00.518384] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:17.132 [2024-11-16 18:53:00.518512] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.132 18:53:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75476 00:12:17.132 [2024-11-16 18:53:00.518565] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.132 [2024-11-16 18:53:00.518578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:17.391 [2024-11-16 18:53:00.823808] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.767 18:53:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 
00:12:18.768 00:12:18.768 real 0m22.683s 00:12:18.768 user 0m27.900s 00:12:18.768 sys 0m3.531s 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:18.768 ************************************ 00:12:18.768 END TEST raid_rebuild_test_sb 00:12:18.768 ************************************ 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.768 18:53:01 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:12:18.768 18:53:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:18.768 18:53:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:18.768 18:53:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.768 ************************************ 00:12:18.768 START TEST raid_rebuild_test_io 00:12:18.768 ************************************ 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76200 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76200 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 
76200 ']' 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:18.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:18.768 18:53:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:18.768 [2024-11-16 18:53:02.062945] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:18.768 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:18.768 Zero copy mechanism will not be used. 00:12:18.768 [2024-11-16 18:53:02.063152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76200 ] 00:12:18.768 [2024-11-16 18:53:02.234417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.026 [2024-11-16 18:53:02.345266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.284 [2024-11-16 18:53:02.534755] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.284 [2024-11-16 18:53:02.534809] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.543 BaseBdev1_malloc 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.543 [2024-11-16 18:53:02.950820] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:19.543 [2024-11-16 18:53:02.950890] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.543 [2024-11-16 18:53:02.950916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:19.543 [2024-11-16 18:53:02.950927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.543 [2024-11-16 18:53:02.953022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.543 [2024-11-16 18:53:02.953058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:19.543 BaseBdev1 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 
00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.543 BaseBdev2_malloc 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.543 18:53:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.543 [2024-11-16 18:53:03.002774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:19.543 [2024-11-16 18:53:03.002839] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.543 [2024-11-16 18:53:03.002860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:19.543 [2024-11-16 18:53:03.002870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.543 [2024-11-16 18:53:03.004919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.543 [2024-11-16 18:53:03.004955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:19.543 BaseBdev2 00:12:19.543 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.543 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:19.543 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.543 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.803 spare_malloc 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.803 spare_delay 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.803 [2024-11-16 18:53:03.081755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:19.803 [2024-11-16 18:53:03.081811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.803 [2024-11-16 18:53:03.081830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:19.803 [2024-11-16 18:53:03.081840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.803 [2024-11-16 18:53:03.083813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.803 [2024-11-16 18:53:03.083856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:19.803 spare 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.803 
18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.803 [2024-11-16 18:53:03.093792] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:19.803 [2024-11-16 18:53:03.095494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:19.803 [2024-11-16 18:53:03.095584] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:19.803 [2024-11-16 18:53:03.095597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:19.803 [2024-11-16 18:53:03.095847] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:19.803 [2024-11-16 18:53:03.095994] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:19.803 [2024-11-16 18:53:03.096009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:19.803 [2024-11-16 18:53:03.096137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.803 "name": "raid_bdev1", 00:12:19.803 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:19.803 "strip_size_kb": 0, 00:12:19.803 "state": "online", 00:12:19.803 "raid_level": "raid1", 00:12:19.803 "superblock": false, 00:12:19.803 "num_base_bdevs": 2, 00:12:19.803 "num_base_bdevs_discovered": 2, 00:12:19.803 "num_base_bdevs_operational": 2, 00:12:19.803 "base_bdevs_list": [ 00:12:19.803 { 00:12:19.803 "name": "BaseBdev1", 00:12:19.803 "uuid": "fc708f4d-07da-5d1c-bba5-4d3c2493fe6d", 00:12:19.803 "is_configured": true, 00:12:19.803 "data_offset": 0, 00:12:19.803 "data_size": 65536 00:12:19.803 }, 00:12:19.803 { 00:12:19.803 "name": "BaseBdev2", 00:12:19.803 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:19.803 "is_configured": true, 00:12:19.803 "data_offset": 0, 00:12:19.803 "data_size": 65536 00:12:19.803 } 00:12:19.803 ] 00:12:19.803 }' 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.803 18:53:03 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:20.062 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:20.062 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.062 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:20.062 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.062 [2024-11-16 18:53:03.529314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:12:20.321 [2024-11-16 18:53:03.620867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:20.321 "name": "raid_bdev1", 00:12:20.321 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:20.321 "strip_size_kb": 0, 00:12:20.321 "state": "online", 00:12:20.321 "raid_level": "raid1", 00:12:20.321 "superblock": false, 00:12:20.321 "num_base_bdevs": 2, 00:12:20.321 "num_base_bdevs_discovered": 1, 00:12:20.321 "num_base_bdevs_operational": 1, 00:12:20.321 "base_bdevs_list": [ 00:12:20.321 { 00:12:20.321 "name": null, 00:12:20.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:20.321 "is_configured": false, 00:12:20.321 "data_offset": 0, 00:12:20.321 "data_size": 65536 00:12:20.321 }, 00:12:20.321 { 00:12:20.321 "name": "BaseBdev2", 00:12:20.321 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:20.321 "is_configured": true, 00:12:20.321 "data_offset": 0, 00:12:20.321 "data_size": 65536 00:12:20.321 } 00:12:20.321 ] 00:12:20.321 }' 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.321 18:53:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.321 [2024-11-16 18:53:03.708411] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:20.321 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:20.321 Zero copy mechanism will not be used. 00:12:20.321 Running I/O for 60 seconds... 
00:12:20.887 18:53:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:20.887 18:53:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.887 18:53:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:20.887 [2024-11-16 18:53:04.101766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:20.887 18:53:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.887 18:53:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:20.887 [2024-11-16 18:53:04.161155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:20.887 [2024-11-16 18:53:04.163073] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:20.887 [2024-11-16 18:53:04.269099] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:20.888 [2024-11-16 18:53:04.269506] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:21.148 [2024-11-16 18:53:04.476561] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:21.148 [2024-11-16 18:53:04.476837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:21.414 153.00 IOPS, 459.00 MiB/s [2024-11-16T18:53:04.886Z] [2024-11-16 18:53:04.830426] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:21.672 [2024-11-16 18:53:04.944073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:21.931 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:21.931 "name": "raid_bdev1", 00:12:21.931 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:21.931 "strip_size_kb": 0, 00:12:21.931 "state": "online", 00:12:21.931 "raid_level": "raid1", 00:12:21.931 "superblock": false, 00:12:21.931 "num_base_bdevs": 2, 00:12:21.931 "num_base_bdevs_discovered": 2, 00:12:21.931 "num_base_bdevs_operational": 2, 00:12:21.931 "process": { 00:12:21.931 "type": "rebuild", 00:12:21.931 "target": "spare", 00:12:21.931 "progress": { 00:12:21.931 "blocks": 12288, 00:12:21.931 "percent": 18 00:12:21.931 } 00:12:21.931 }, 00:12:21.931 "base_bdevs_list": [ 00:12:21.931 { 00:12:21.931 "name": "spare", 00:12:21.931 "uuid": "49c2ebf9-e156-5551-9c61-fec6ba12ba18", 00:12:21.931 "is_configured": true, 00:12:21.931 "data_offset": 0, 00:12:21.931 "data_size": 65536 00:12:21.931 }, 00:12:21.931 { 
00:12:21.931 "name": "BaseBdev2", 00:12:21.931 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:21.931 "is_configured": true, 00:12:21.932 "data_offset": 0, 00:12:21.932 "data_size": 65536 00:12:21.932 } 00:12:21.932 ] 00:12:21.932 }' 00:12:21.932 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:21.932 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:21.932 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:21.932 [2024-11-16 18:53:05.273593] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:21.932 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:21.932 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:21.932 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:21.932 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:21.932 [2024-11-16 18:53:05.285588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:21.932 [2024-11-16 18:53:05.396092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:22.191 [2024-11-16 18:53:05.408228] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:22.191 [2024-11-16 18:53:05.410446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.191 [2024-11-16 18:53:05.410485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:22.191 [2024-11-16 18:53:05.410499] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:22.191 
[2024-11-16 18:53:05.457871] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.191 "name": 
"raid_bdev1", 00:12:22.191 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:22.191 "strip_size_kb": 0, 00:12:22.191 "state": "online", 00:12:22.191 "raid_level": "raid1", 00:12:22.191 "superblock": false, 00:12:22.191 "num_base_bdevs": 2, 00:12:22.191 "num_base_bdevs_discovered": 1, 00:12:22.191 "num_base_bdevs_operational": 1, 00:12:22.191 "base_bdevs_list": [ 00:12:22.191 { 00:12:22.191 "name": null, 00:12:22.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.191 "is_configured": false, 00:12:22.191 "data_offset": 0, 00:12:22.191 "data_size": 65536 00:12:22.191 }, 00:12:22.191 { 00:12:22.191 "name": "BaseBdev2", 00:12:22.191 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:22.191 "is_configured": true, 00:12:22.191 "data_offset": 0, 00:12:22.191 "data_size": 65536 00:12:22.191 } 00:12:22.191 ] 00:12:22.191 }' 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.191 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.450 151.50 IOPS, 454.50 MiB/s [2024-11-16T18:53:05.922Z] 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.450 18:53:05 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:22.450 "name": "raid_bdev1", 00:12:22.450 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:22.450 "strip_size_kb": 0, 00:12:22.450 "state": "online", 00:12:22.450 "raid_level": "raid1", 00:12:22.450 "superblock": false, 00:12:22.450 "num_base_bdevs": 2, 00:12:22.450 "num_base_bdevs_discovered": 1, 00:12:22.450 "num_base_bdevs_operational": 1, 00:12:22.450 "base_bdevs_list": [ 00:12:22.450 { 00:12:22.450 "name": null, 00:12:22.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.450 "is_configured": false, 00:12:22.450 "data_offset": 0, 00:12:22.450 "data_size": 65536 00:12:22.450 }, 00:12:22.450 { 00:12:22.450 "name": "BaseBdev2", 00:12:22.450 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:22.450 "is_configured": true, 00:12:22.450 "data_offset": 0, 00:12:22.450 "data_size": 65536 00:12:22.450 } 00:12:22.450 ] 00:12:22.450 }' 00:12:22.450 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:22.709 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:22.709 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:22.709 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:22.709 18:53:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:22.709 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.709 18:53:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:22.709 [2024-11-16 18:53:06.003166] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:22.709 18:53:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.709 18:53:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:22.709 [2024-11-16 18:53:06.043023] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:22.709 [2024-11-16 18:53:06.044920] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:22.709 [2024-11-16 18:53:06.168829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.709 [2024-11-16 18:53:06.169411] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:22.968 [2024-11-16 18:53:06.383507] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:22.968 [2024-11-16 18:53:06.383880] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:23.535 165.33 IOPS, 496.00 MiB/s [2024-11-16T18:53:07.007Z] [2024-11-16 18:53:06.822430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:23.535 [2024-11-16 18:53:06.822810] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:23.793 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.794 "name": "raid_bdev1", 00:12:23.794 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:23.794 "strip_size_kb": 0, 00:12:23.794 "state": "online", 00:12:23.794 "raid_level": "raid1", 00:12:23.794 "superblock": false, 00:12:23.794 "num_base_bdevs": 2, 00:12:23.794 "num_base_bdevs_discovered": 2, 00:12:23.794 "num_base_bdevs_operational": 2, 00:12:23.794 "process": { 00:12:23.794 "type": "rebuild", 00:12:23.794 "target": "spare", 00:12:23.794 "progress": { 00:12:23.794 "blocks": 10240, 00:12:23.794 "percent": 15 00:12:23.794 } 00:12:23.794 }, 00:12:23.794 "base_bdevs_list": [ 00:12:23.794 { 00:12:23.794 "name": "spare", 00:12:23.794 "uuid": "49c2ebf9-e156-5551-9c61-fec6ba12ba18", 00:12:23.794 "is_configured": true, 00:12:23.794 "data_offset": 0, 00:12:23.794 "data_size": 65536 00:12:23.794 }, 00:12:23.794 { 00:12:23.794 "name": "BaseBdev2", 00:12:23.794 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:23.794 "is_configured": true, 00:12:23.794 "data_offset": 0, 00:12:23.794 "data_size": 65536 00:12:23.794 } 00:12:23.794 ] 00:12:23.794 }' 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.794 18:53:07 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=389 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.794 18:53:07 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.794 [2024-11-16 18:53:07.174887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:23.794 "name": "raid_bdev1", 00:12:23.794 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:23.794 "strip_size_kb": 0, 00:12:23.794 "state": "online", 00:12:23.794 "raid_level": "raid1", 00:12:23.794 "superblock": false, 00:12:23.794 "num_base_bdevs": 2, 00:12:23.794 "num_base_bdevs_discovered": 2, 00:12:23.794 "num_base_bdevs_operational": 2, 00:12:23.794 "process": { 00:12:23.794 "type": "rebuild", 00:12:23.794 "target": "spare", 00:12:23.794 "progress": { 00:12:23.794 "blocks": 12288, 00:12:23.794 "percent": 18 00:12:23.794 } 00:12:23.794 }, 00:12:23.794 "base_bdevs_list": [ 00:12:23.794 { 00:12:23.794 "name": "spare", 00:12:23.794 "uuid": "49c2ebf9-e156-5551-9c61-fec6ba12ba18", 00:12:23.794 "is_configured": true, 00:12:23.794 "data_offset": 0, 00:12:23.794 "data_size": 65536 00:12:23.794 }, 00:12:23.794 { 00:12:23.794 "name": "BaseBdev2", 00:12:23.794 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:23.794 "is_configured": true, 00:12:23.794 "data_offset": 0, 00:12:23.794 "data_size": 65536 00:12:23.794 } 00:12:23.794 ] 00:12:23.794 }' 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:23.794 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:24.052 [2024-11-16 18:53:07.293804] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:24.052 [2024-11-16 18:53:07.294094] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:24.052 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:24.052 18:53:07 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:24.310 [2024-11-16 18:53:07.617744] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:12:24.569 140.50 IOPS, 421.50 MiB/s [2024-11-16T18:53:08.041Z] [2024-11-16 18:53:07.831531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:24.827 [2024-11-16 18:53:08.079415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:24.827 [2024-11-16 18:53:08.206334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:25.086 "name": "raid_bdev1", 00:12:25.086 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:25.086 "strip_size_kb": 0, 00:12:25.086 "state": "online", 00:12:25.086 "raid_level": "raid1", 00:12:25.086 "superblock": false, 00:12:25.086 "num_base_bdevs": 2, 00:12:25.086 "num_base_bdevs_discovered": 2, 00:12:25.086 "num_base_bdevs_operational": 2, 00:12:25.086 "process": { 00:12:25.086 "type": "rebuild", 00:12:25.086 "target": "spare", 00:12:25.086 "progress": { 00:12:25.086 "blocks": 28672, 00:12:25.086 "percent": 43 00:12:25.086 } 00:12:25.086 }, 00:12:25.086 "base_bdevs_list": [ 00:12:25.086 { 00:12:25.086 "name": "spare", 00:12:25.086 "uuid": "49c2ebf9-e156-5551-9c61-fec6ba12ba18", 00:12:25.086 "is_configured": true, 00:12:25.086 "data_offset": 0, 00:12:25.086 "data_size": 65536 00:12:25.086 }, 00:12:25.086 { 00:12:25.086 "name": "BaseBdev2", 00:12:25.086 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:25.086 "is_configured": true, 00:12:25.086 "data_offset": 0, 00:12:25.086 "data_size": 65536 00:12:25.086 } 00:12:25.086 ] 00:12:25.086 }' 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:25.086 18:53:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:25.086 [2024-11-16 
18:53:08.535130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:25.344 124.60 IOPS, 373.80 MiB/s [2024-11-16T18:53:08.817Z] [2024-11-16 18:53:08.760363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:26.297 [2024-11-16 18:53:09.428124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:26.297 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:26.297 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:26.297 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:26.297 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:26.297 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:26.297 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:26.298 "name": "raid_bdev1", 00:12:26.298 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:26.298 "strip_size_kb": 0, 00:12:26.298 "state": 
"online", 00:12:26.298 "raid_level": "raid1", 00:12:26.298 "superblock": false, 00:12:26.298 "num_base_bdevs": 2, 00:12:26.298 "num_base_bdevs_discovered": 2, 00:12:26.298 "num_base_bdevs_operational": 2, 00:12:26.298 "process": { 00:12:26.298 "type": "rebuild", 00:12:26.298 "target": "spare", 00:12:26.298 "progress": { 00:12:26.298 "blocks": 45056, 00:12:26.298 "percent": 68 00:12:26.298 } 00:12:26.298 }, 00:12:26.298 "base_bdevs_list": [ 00:12:26.298 { 00:12:26.298 "name": "spare", 00:12:26.298 "uuid": "49c2ebf9-e156-5551-9c61-fec6ba12ba18", 00:12:26.298 "is_configured": true, 00:12:26.298 "data_offset": 0, 00:12:26.298 "data_size": 65536 00:12:26.298 }, 00:12:26.298 { 00:12:26.298 "name": "BaseBdev2", 00:12:26.298 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:26.298 "is_configured": true, 00:12:26.298 "data_offset": 0, 00:12:26.298 "data_size": 65536 00:12:26.298 } 00:12:26.298 ] 00:12:26.298 }' 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:26.298 18:53:09 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:26.298 [2024-11-16 18:53:09.642971] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:26.556 110.50 IOPS, 331.50 MiB/s [2024-11-16T18:53:10.028Z] [2024-11-16 18:53:09.968695] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:12:26.815 [2024-11-16 18:53:10.284041] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 
offset_end: 61440 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:27.382 [2024-11-16 18:53:10.609819] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:27.382 "name": "raid_bdev1", 00:12:27.382 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:27.382 "strip_size_kb": 0, 00:12:27.382 "state": "online", 00:12:27.382 "raid_level": "raid1", 00:12:27.382 "superblock": false, 00:12:27.382 "num_base_bdevs": 2, 00:12:27.382 "num_base_bdevs_discovered": 2, 00:12:27.382 "num_base_bdevs_operational": 2, 00:12:27.382 "process": { 00:12:27.382 "type": "rebuild", 00:12:27.382 "target": "spare", 00:12:27.382 "progress": { 00:12:27.382 "blocks": 65536, 00:12:27.382 "percent": 100 
00:12:27.382 } 00:12:27.382 }, 00:12:27.382 "base_bdevs_list": [ 00:12:27.382 { 00:12:27.382 "name": "spare", 00:12:27.382 "uuid": "49c2ebf9-e156-5551-9c61-fec6ba12ba18", 00:12:27.382 "is_configured": true, 00:12:27.382 "data_offset": 0, 00:12:27.382 "data_size": 65536 00:12:27.382 }, 00:12:27.382 { 00:12:27.382 "name": "BaseBdev2", 00:12:27.382 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:27.382 "is_configured": true, 00:12:27.382 "data_offset": 0, 00:12:27.382 "data_size": 65536 00:12:27.382 } 00:12:27.382 ] 00:12:27.382 }' 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:27.382 98.71 IOPS, 296.14 MiB/s [2024-11-16T18:53:10.854Z] [2024-11-16 18:53:10.709681] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:27.382 [2024-11-16 18:53:10.711858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:27.382 18:53:10 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:28.340 91.12 IOPS, 273.38 MiB/s [2024-11-16T18:53:11.812Z] 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.340 18:53:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.599 "name": "raid_bdev1", 00:12:28.599 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:28.599 "strip_size_kb": 0, 00:12:28.599 "state": "online", 00:12:28.599 "raid_level": "raid1", 00:12:28.599 "superblock": false, 00:12:28.599 "num_base_bdevs": 2, 00:12:28.599 "num_base_bdevs_discovered": 2, 00:12:28.599 "num_base_bdevs_operational": 2, 00:12:28.599 "base_bdevs_list": [ 00:12:28.599 { 00:12:28.599 "name": "spare", 00:12:28.599 "uuid": "49c2ebf9-e156-5551-9c61-fec6ba12ba18", 00:12:28.599 "is_configured": true, 00:12:28.599 "data_offset": 0, 00:12:28.599 "data_size": 65536 00:12:28.599 }, 00:12:28.599 { 00:12:28.599 "name": "BaseBdev2", 00:12:28.599 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:28.599 "is_configured": true, 00:12:28.599 "data_offset": 0, 00:12:28.599 "data_size": 65536 00:12:28.599 } 00:12:28.599 ] 00:12:28.599 }' 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.599 18:53:11 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:28.599 "name": "raid_bdev1", 00:12:28.599 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:28.599 "strip_size_kb": 0, 00:12:28.599 "state": "online", 00:12:28.599 "raid_level": "raid1", 00:12:28.599 "superblock": false, 00:12:28.599 "num_base_bdevs": 2, 00:12:28.599 "num_base_bdevs_discovered": 2, 00:12:28.599 "num_base_bdevs_operational": 2, 00:12:28.599 "base_bdevs_list": [ 00:12:28.599 { 00:12:28.599 "name": "spare", 00:12:28.599 "uuid": "49c2ebf9-e156-5551-9c61-fec6ba12ba18", 00:12:28.599 "is_configured": true, 00:12:28.599 "data_offset": 0, 00:12:28.599 "data_size": 65536 00:12:28.599 }, 
00:12:28.599 { 00:12:28.599 "name": "BaseBdev2", 00:12:28.599 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:28.599 "is_configured": true, 00:12:28.599 "data_offset": 0, 00:12:28.599 "data_size": 65536 00:12:28.599 } 00:12:28.599 ] 00:12:28.599 }' 00:12:28.599 18:53:11 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:28.599 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.857 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.857 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.857 "name": "raid_bdev1", 00:12:28.857 "uuid": "3610bf7c-e93c-44ad-9c13-b905c314c9f6", 00:12:28.857 "strip_size_kb": 0, 00:12:28.857 "state": "online", 00:12:28.857 "raid_level": "raid1", 00:12:28.857 "superblock": false, 00:12:28.857 "num_base_bdevs": 2, 00:12:28.857 "num_base_bdevs_discovered": 2, 00:12:28.857 "num_base_bdevs_operational": 2, 00:12:28.857 "base_bdevs_list": [ 00:12:28.857 { 00:12:28.857 "name": "spare", 00:12:28.857 "uuid": "49c2ebf9-e156-5551-9c61-fec6ba12ba18", 00:12:28.857 "is_configured": true, 00:12:28.857 "data_offset": 0, 00:12:28.857 "data_size": 65536 00:12:28.857 }, 00:12:28.857 { 00:12:28.857 "name": "BaseBdev2", 00:12:28.857 "uuid": "cfea147c-d6dc-59cb-8c11-d3b19b2338a7", 00:12:28.857 "is_configured": true, 00:12:28.857 "data_offset": 0, 00:12:28.857 "data_size": 65536 00:12:28.857 } 00:12:28.857 ] 00:12:28.857 }' 00:12:28.857 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.857 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.116 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:29.116 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.116 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.116 [2024-11-16 18:53:12.509221] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:29.116 
[2024-11-16 18:53:12.509258] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:29.117 00:12:29.117 Latency(us) 00:12:29.117 [2024-11-16T18:53:12.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:29.117 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:29.117 raid_bdev1 : 8.84 85.86 257.59 0.00 0.00 16155.53 300.49 108978.64 00:12:29.117 [2024-11-16T18:53:12.589Z] =================================================================================================================== 00:12:29.117 [2024-11-16T18:53:12.589Z] Total : 85.86 257.59 0.00 0.00 16155.53 300.49 108978.64 00:12:29.117 [2024-11-16 18:53:12.554718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.117 [2024-11-16 18:53:12.554784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:29.117 [2024-11-16 18:53:12.554861] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:29.117 [2024-11-16 18:53:12.554873] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:29.117 { 00:12:29.117 "results": [ 00:12:29.117 { 00:12:29.117 "job": "raid_bdev1", 00:12:29.117 "core_mask": "0x1", 00:12:29.117 "workload": "randrw", 00:12:29.117 "percentage": 50, 00:12:29.117 "status": "finished", 00:12:29.117 "queue_depth": 2, 00:12:29.117 "io_size": 3145728, 00:12:29.117 "runtime": 8.839745, 00:12:29.117 "iops": 85.86220530117102, 00:12:29.117 "mibps": 257.58661590351306, 00:12:29.117 "io_failed": 0, 00:12:29.117 "io_timeout": 0, 00:12:29.117 "avg_latency_us": 16155.530483110964, 00:12:29.117 "min_latency_us": 300.49257641921395, 00:12:29.117 "max_latency_us": 108978.64104803493 00:12:29.117 } 00:12:29.117 ], 00:12:29.117 "core_count": 1 00:12:29.117 } 00:12:29.117 18:53:12 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.117 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.117 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:29.117 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.117 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:29.117 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.375 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:29.375 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:29.375 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:29.376 
/dev/nbd0 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.376 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.634 1+0 records in 00:12:29.634 1+0 records out 00:12:29.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463385 s, 8.8 MB/s 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:29.634 
18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.634 18:53:12 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:29.634 /dev/nbd1 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:12:29.634 18:53:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:29.634 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:29.892 1+0 records in 00:12:29.892 1+0 records out 00:12:29.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387301 s, 10.6 MB/s 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:29.892 18:53:13 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:29.892 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 
-- # local i 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:30.150 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:30.408 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76200 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76200 ']' 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76200 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76200 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.409 18:53:13 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.409 killing process with pid 76200 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76200' 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76200 00:12:30.409 Received shutdown signal, test time was about 10.071270 seconds 00:12:30.409 00:12:30.409 Latency(us) 00:12:30.409 [2024-11-16T18:53:13.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.409 [2024-11-16T18:53:13.881Z] =================================================================================================================== 00:12:30.409 [2024-11-16T18:53:13.881Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:30.409 [2024-11-16 18:53:13.762259] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.409 18:53:13 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76200 00:12:30.667 [2024-11-16 18:53:13.985612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:32.043 00:12:32.043 real 0m13.136s 00:12:32.043 user 0m16.380s 00:12:32.043 sys 0m1.481s 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.043 ************************************ 00:12:32.043 END TEST raid_rebuild_test_io 00:12:32.043 ************************************ 00:12:32.043 18:53:15 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:12:32.043 18:53:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:32.043 18:53:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:12:32.043 18:53:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:32.043 ************************************ 00:12:32.043 START TEST raid_rebuild_test_sb_io 00:12:32.043 ************************************ 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:32.043 18:53:15 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76589 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76589 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76589 ']' 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.043 18:53:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.043 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:32.043 Zero copy mechanism will not be used. 00:12:32.043 [2024-11-16 18:53:15.278794] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:32.044 [2024-11-16 18:53:15.278895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76589 ] 00:12:32.044 [2024-11-16 18:53:15.453004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.302 [2024-11-16 18:53:15.558727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.302 [2024-11-16 18:53:15.753594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.302 [2024-11-16 18:53:15.753675] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.870 BaseBdev1_malloc 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.870 [2024-11-16 18:53:16.150962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:32.870 [2024-11-16 18:53:16.151029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.870 [2024-11-16 18:53:16.151052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:32.870 [2024-11-16 18:53:16.151063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.870 [2024-11-16 18:53:16.153147] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.870 [2024-11-16 18:53:16.153187] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.870 BaseBdev1 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.870 BaseBdev2_malloc 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.870 [2024-11-16 18:53:16.205399] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:32.870 [2024-11-16 18:53:16.205459] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.870 [2024-11-16 18:53:16.205478] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:32.870 [2024-11-16 18:53:16.205490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.870 [2024-11-16 18:53:16.207428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.870 [2024-11-16 18:53:16.207461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:32.870 BaseBdev2 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.870 spare_malloc 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.870 spare_delay 
00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.870 [2024-11-16 18:53:16.283784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:32.870 [2024-11-16 18:53:16.283834] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.870 [2024-11-16 18:53:16.283859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:32.870 [2024-11-16 18:53:16.283870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.870 [2024-11-16 18:53:16.285916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.870 [2024-11-16 18:53:16.285949] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:32.870 spare 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.870 [2024-11-16 18:53:16.295827] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:32.870 [2024-11-16 18:53:16.297567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:32.870 [2024-11-16 18:53:16.297750] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:32.870 [2024-11-16 18:53:16.297774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.870 [2024-11-16 18:53:16.298003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:32.870 [2024-11-16 18:53:16.298178] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:32.870 [2024-11-16 18:53:16.298190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:32.870 [2024-11-16 18:53:16.298333] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:32.870 18:53:16 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:32.870 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.129 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.129 "name": "raid_bdev1", 00:12:33.129 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:33.129 "strip_size_kb": 0, 00:12:33.129 "state": "online", 00:12:33.129 "raid_level": "raid1", 00:12:33.129 "superblock": true, 00:12:33.129 "num_base_bdevs": 2, 00:12:33.129 "num_base_bdevs_discovered": 2, 00:12:33.129 "num_base_bdevs_operational": 2, 00:12:33.129 "base_bdevs_list": [ 00:12:33.129 { 00:12:33.129 "name": "BaseBdev1", 00:12:33.129 "uuid": "70e07561-e1e9-576a-aa49-681eafd7d6cd", 00:12:33.129 "is_configured": true, 00:12:33.129 "data_offset": 2048, 00:12:33.129 "data_size": 63488 00:12:33.129 }, 00:12:33.129 { 00:12:33.129 "name": "BaseBdev2", 00:12:33.129 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:33.129 "is_configured": true, 00:12:33.129 "data_offset": 2048, 00:12:33.129 "data_size": 63488 00:12:33.129 } 00:12:33.129 ] 00:12:33.129 }' 00:12:33.129 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.129 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.387 18:53:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:33.387 [2024-11-16 18:53:16.707423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.387 [2024-11-16 18:53:16.767020] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:33.387 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.388 "name": "raid_bdev1", 00:12:33.388 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:33.388 "strip_size_kb": 0, 00:12:33.388 "state": "online", 00:12:33.388 
"raid_level": "raid1", 00:12:33.388 "superblock": true, 00:12:33.388 "num_base_bdevs": 2, 00:12:33.388 "num_base_bdevs_discovered": 1, 00:12:33.388 "num_base_bdevs_operational": 1, 00:12:33.388 "base_bdevs_list": [ 00:12:33.388 { 00:12:33.388 "name": null, 00:12:33.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:33.388 "is_configured": false, 00:12:33.388 "data_offset": 0, 00:12:33.388 "data_size": 63488 00:12:33.388 }, 00:12:33.388 { 00:12:33.388 "name": "BaseBdev2", 00:12:33.388 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:33.388 "is_configured": true, 00:12:33.388 "data_offset": 2048, 00:12:33.388 "data_size": 63488 00:12:33.388 } 00:12:33.388 ] 00:12:33.388 }' 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.388 18:53:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.645 [2024-11-16 18:53:16.866814] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:33.645 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:33.645 Zero copy mechanism will not be used. 00:12:33.645 Running I/O for 60 seconds... 
00:12:33.904 18:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:33.904 18:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.904 18:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:33.904 [2024-11-16 18:53:17.190893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:33.904 18:53:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.904 18:53:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:33.904 [2024-11-16 18:53:17.263798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:33.904 [2024-11-16 18:53:17.265690] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:33.904 [2024-11-16 18:53:17.373731] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:33.904 [2024-11-16 18:53:17.374329] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:34.162 [2024-11-16 18:53:17.581624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:34.162 [2024-11-16 18:53:17.581911] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:12:34.680 196.00 IOPS, 588.00 MiB/s [2024-11-16T18:53:18.152Z] [2024-11-16 18:53:17.930429] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:34.680 [2024-11-16 18:53:17.931046] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:34.680 [2024-11-16 18:53:18.151707] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:34.680 [2024-11-16 18:53:18.152050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:34.939 "name": "raid_bdev1", 00:12:34.939 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:34.939 "strip_size_kb": 0, 00:12:34.939 "state": "online", 00:12:34.939 "raid_level": "raid1", 00:12:34.939 "superblock": true, 00:12:34.939 "num_base_bdevs": 2, 00:12:34.939 "num_base_bdevs_discovered": 2, 00:12:34.939 "num_base_bdevs_operational": 2, 00:12:34.939 "process": { 00:12:34.939 "type": "rebuild", 00:12:34.939 "target": "spare", 00:12:34.939 "progress": { 
00:12:34.939 "blocks": 10240, 00:12:34.939 "percent": 16 00:12:34.939 } 00:12:34.939 }, 00:12:34.939 "base_bdevs_list": [ 00:12:34.939 { 00:12:34.939 "name": "spare", 00:12:34.939 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:34.939 "is_configured": true, 00:12:34.939 "data_offset": 2048, 00:12:34.939 "data_size": 63488 00:12:34.939 }, 00:12:34.939 { 00:12:34.939 "name": "BaseBdev2", 00:12:34.939 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:34.939 "is_configured": true, 00:12:34.939 "data_offset": 2048, 00:12:34.939 "data_size": 63488 00:12:34.939 } 00:12:34.939 ] 00:12:34.939 }' 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.939 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:34.939 [2024-11-16 18:53:18.380617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.212 [2024-11-16 18:53:18.595346] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:35.212 [2024-11-16 18:53:18.603105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.212 [2024-11-16 18:53:18.603144] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:35.212 [2024-11-16 18:53:18.603158] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed 
to remove target bdev: No such device 00:12:35.212 [2024-11-16 18:53:18.649892] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.212 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.213 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.213 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.213 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.213 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.475 18:53:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.475 "name": "raid_bdev1", 00:12:35.475 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:35.475 "strip_size_kb": 0, 00:12:35.475 "state": "online", 00:12:35.475 "raid_level": "raid1", 00:12:35.475 "superblock": true, 00:12:35.475 "num_base_bdevs": 2, 00:12:35.475 "num_base_bdevs_discovered": 1, 00:12:35.475 "num_base_bdevs_operational": 1, 00:12:35.475 "base_bdevs_list": [ 00:12:35.475 { 00:12:35.475 "name": null, 00:12:35.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.475 "is_configured": false, 00:12:35.475 "data_offset": 0, 00:12:35.475 "data_size": 63488 00:12:35.475 }, 00:12:35.475 { 00:12:35.475 "name": "BaseBdev2", 00:12:35.475 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:35.475 "is_configured": true, 00:12:35.475 "data_offset": 2048, 00:12:35.475 "data_size": 63488 00:12:35.475 } 00:12:35.475 ] 00:12:35.475 }' 00:12:35.475 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.475 18:53:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.738 141.50 IOPS, 424.50 MiB/s [2024-11-16T18:53:19.210Z] 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:35.738 "name": "raid_bdev1", 00:12:35.738 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:35.738 "strip_size_kb": 0, 00:12:35.738 "state": "online", 00:12:35.738 "raid_level": "raid1", 00:12:35.738 "superblock": true, 00:12:35.738 "num_base_bdevs": 2, 00:12:35.738 "num_base_bdevs_discovered": 1, 00:12:35.738 "num_base_bdevs_operational": 1, 00:12:35.738 "base_bdevs_list": [ 00:12:35.738 { 00:12:35.738 "name": null, 00:12:35.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.738 "is_configured": false, 00:12:35.738 "data_offset": 0, 00:12:35.738 "data_size": 63488 00:12:35.738 }, 00:12:35.738 { 00:12:35.738 "name": "BaseBdev2", 00:12:35.738 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:35.738 "is_configured": true, 00:12:35.738 "data_offset": 2048, 00:12:35.738 "data_size": 63488 00:12:35.738 } 00:12:35.738 ] 00:12:35.738 }' 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:35.738 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.004 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:36.004 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:36.004 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:36.004 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.004 [2024-11-16 18:53:19.243796] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.004 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.004 18:53:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:36.004 [2024-11-16 18:53:19.309114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:12:36.004 [2024-11-16 18:53:19.311099] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.004 [2024-11-16 18:53:19.423025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:36.004 [2024-11-16 18:53:19.423579] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:12:36.609 154.00 IOPS, 462.00 MiB/s [2024-11-16T18:53:20.082Z] [2024-11-16 18:53:19.907107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:12:36.869 [2024-11-16 18:53:20.120349] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:36.869 [2024-11-16 18:53:20.120595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:36.869 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.127 "name": "raid_bdev1", 00:12:37.127 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:37.127 "strip_size_kb": 0, 00:12:37.127 "state": "online", 00:12:37.127 "raid_level": "raid1", 00:12:37.127 "superblock": true, 00:12:37.127 "num_base_bdevs": 2, 00:12:37.127 "num_base_bdevs_discovered": 2, 00:12:37.127 "num_base_bdevs_operational": 2, 00:12:37.127 "process": { 00:12:37.127 "type": "rebuild", 00:12:37.127 "target": "spare", 00:12:37.127 "progress": { 00:12:37.127 "blocks": 10240, 00:12:37.127 "percent": 16 00:12:37.127 } 00:12:37.127 }, 00:12:37.127 "base_bdevs_list": [ 00:12:37.127 { 00:12:37.127 "name": "spare", 00:12:37.127 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:37.127 "is_configured": true, 00:12:37.127 "data_offset": 2048, 00:12:37.127 "data_size": 63488 00:12:37.127 }, 00:12:37.127 { 00:12:37.127 "name": "BaseBdev2", 00:12:37.127 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:37.127 "is_configured": true, 00:12:37.127 "data_offset": 2048, 00:12:37.127 "data_size": 63488 00:12:37.127 } 00:12:37.127 ] 00:12:37.127 }' 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.127 [2024-11-16 18:53:20.440874] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:37.127 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=402 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.127 18:53:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:37.127 "name": "raid_bdev1", 00:12:37.127 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:37.127 "strip_size_kb": 0, 00:12:37.127 "state": "online", 00:12:37.127 "raid_level": "raid1", 00:12:37.127 "superblock": true, 00:12:37.127 "num_base_bdevs": 2, 00:12:37.127 "num_base_bdevs_discovered": 2, 00:12:37.127 "num_base_bdevs_operational": 2, 00:12:37.127 "process": { 00:12:37.127 "type": "rebuild", 00:12:37.127 "target": "spare", 00:12:37.127 "progress": { 00:12:37.127 "blocks": 14336, 00:12:37.127 "percent": 22 00:12:37.127 } 00:12:37.127 }, 00:12:37.127 "base_bdevs_list": [ 00:12:37.127 { 00:12:37.127 "name": "spare", 00:12:37.127 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:37.127 "is_configured": true, 00:12:37.127 "data_offset": 2048, 00:12:37.127 "data_size": 63488 00:12:37.127 }, 00:12:37.127 { 00:12:37.127 "name": "BaseBdev2", 00:12:37.127 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:37.127 "is_configured": true, 00:12:37.127 "data_offset": 2048, 00:12:37.127 "data_size": 63488 00:12:37.127 } 00:12:37.127 ] 00:12:37.127 }' 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:37.127 18:53:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:37.127 18:53:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:37.386 [2024-11-16 18:53:20.656724] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:37.386 [2024-11-16 18:53:20.657037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:12:37.646 136.75 IOPS, 410.25 MiB/s [2024-11-16T18:53:21.118Z] [2024-11-16 18:53:21.046875] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:37.646 [2024-11-16 18:53:21.047214] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:12:37.904 [2024-11-16 18:53:21.281624] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.162 
18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.162 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.162 "name": "raid_bdev1", 00:12:38.162 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:38.162 "strip_size_kb": 0, 00:12:38.162 "state": "online", 00:12:38.162 "raid_level": "raid1", 00:12:38.162 "superblock": true, 00:12:38.162 "num_base_bdevs": 2, 00:12:38.162 "num_base_bdevs_discovered": 2, 00:12:38.162 "num_base_bdevs_operational": 2, 00:12:38.163 "process": { 00:12:38.163 "type": "rebuild", 00:12:38.163 "target": "spare", 00:12:38.163 "progress": { 00:12:38.163 "blocks": 30720, 00:12:38.163 "percent": 48 00:12:38.163 } 00:12:38.163 }, 00:12:38.163 "base_bdevs_list": [ 00:12:38.163 { 00:12:38.163 "name": "spare", 00:12:38.163 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:38.163 "is_configured": true, 00:12:38.163 "data_offset": 2048, 00:12:38.163 "data_size": 63488 00:12:38.163 }, 00:12:38.163 { 00:12:38.163 "name": "BaseBdev2", 00:12:38.163 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:38.163 "is_configured": true, 00:12:38.163 "data_offset": 2048, 00:12:38.163 "data_size": 63488 00:12:38.163 } 00:12:38.163 ] 00:12:38.163 }' 00:12:38.163 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.163 [2024-11-16 18:53:21.634029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:38.421 [2024-11-16 18:53:21.634581] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:12:38.421 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.421 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.421 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.421 18:53:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:38.421 [2024-11-16 18:53:21.743078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:12:38.988 126.20 IOPS, 378.60 MiB/s [2024-11-16T18:53:22.460Z] [2024-11-16 18:53:22.401196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:12:39.246 [2024-11-16 18:53:22.504461] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:12:39.246 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.246 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.246 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.246 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.246 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.246 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.505 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.505 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.505 
18:53:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.505 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:39.505 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.505 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:39.505 "name": "raid_bdev1", 00:12:39.505 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:39.505 "strip_size_kb": 0, 00:12:39.505 "state": "online", 00:12:39.505 "raid_level": "raid1", 00:12:39.505 "superblock": true, 00:12:39.505 "num_base_bdevs": 2, 00:12:39.505 "num_base_bdevs_discovered": 2, 00:12:39.505 "num_base_bdevs_operational": 2, 00:12:39.505 "process": { 00:12:39.505 "type": "rebuild", 00:12:39.505 "target": "spare", 00:12:39.505 "progress": { 00:12:39.505 "blocks": 51200, 00:12:39.505 "percent": 80 00:12:39.505 } 00:12:39.505 }, 00:12:39.505 "base_bdevs_list": [ 00:12:39.505 { 00:12:39.505 "name": "spare", 00:12:39.505 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:39.505 "is_configured": true, 00:12:39.505 "data_offset": 2048, 00:12:39.505 "data_size": 63488 00:12:39.505 }, 00:12:39.505 { 00:12:39.505 "name": "BaseBdev2", 00:12:39.505 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:39.505 "is_configured": true, 00:12:39.505 "data_offset": 2048, 00:12:39.505 "data_size": 63488 00:12:39.505 } 00:12:39.505 ] 00:12:39.505 }' 00:12:39.505 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:39.505 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:39.505 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:39.505 18:53:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:39.505 18:53:22 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:40.071 114.33 IOPS, 343.00 MiB/s [2024-11-16T18:53:23.543Z] [2024-11-16 18:53:23.313260] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:40.071 [2024-11-16 18:53:23.369756] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:40.071 [2024-11-16 18:53:23.371244] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.639 103.00 IOPS, 309.00 MiB/s [2024-11-16T18:53:24.111Z] 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.639 "name": "raid_bdev1", 00:12:40.639 "uuid": 
"01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:40.639 "strip_size_kb": 0, 00:12:40.639 "state": "online", 00:12:40.639 "raid_level": "raid1", 00:12:40.639 "superblock": true, 00:12:40.639 "num_base_bdevs": 2, 00:12:40.639 "num_base_bdevs_discovered": 2, 00:12:40.639 "num_base_bdevs_operational": 2, 00:12:40.639 "base_bdevs_list": [ 00:12:40.639 { 00:12:40.639 "name": "spare", 00:12:40.639 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:40.639 "is_configured": true, 00:12:40.639 "data_offset": 2048, 00:12:40.639 "data_size": 63488 00:12:40.639 }, 00:12:40.639 { 00:12:40.639 "name": "BaseBdev2", 00:12:40.639 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:40.639 "is_configured": true, 00:12:40.639 "data_offset": 2048, 00:12:40.639 "data_size": 63488 00:12:40.639 } 00:12:40.639 ] 00:12:40.639 }' 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.639 18:53:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.639 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.639 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.639 "name": "raid_bdev1", 00:12:40.639 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:40.639 "strip_size_kb": 0, 00:12:40.639 "state": "online", 00:12:40.639 "raid_level": "raid1", 00:12:40.639 "superblock": true, 00:12:40.639 "num_base_bdevs": 2, 00:12:40.639 "num_base_bdevs_discovered": 2, 00:12:40.639 "num_base_bdevs_operational": 2, 00:12:40.639 "base_bdevs_list": [ 00:12:40.639 { 00:12:40.639 "name": "spare", 00:12:40.639 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:40.639 "is_configured": true, 00:12:40.639 "data_offset": 2048, 00:12:40.639 "data_size": 63488 00:12:40.639 }, 00:12:40.639 { 00:12:40.639 "name": "BaseBdev2", 00:12:40.639 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:40.639 "is_configured": true, 00:12:40.639 "data_offset": 2048, 00:12:40.639 "data_size": 63488 00:12:40.639 } 00:12:40.639 ] 00:12:40.639 }' 00:12:40.639 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.639 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:40.639 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.898 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.899 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.899 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:40.899 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.899 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:40.899 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.899 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.899 "name": "raid_bdev1", 00:12:40.899 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:40.899 "strip_size_kb": 0, 00:12:40.899 "state": "online", 00:12:40.899 "raid_level": "raid1", 00:12:40.899 "superblock": true, 00:12:40.899 
"num_base_bdevs": 2, 00:12:40.899 "num_base_bdevs_discovered": 2, 00:12:40.899 "num_base_bdevs_operational": 2, 00:12:40.899 "base_bdevs_list": [ 00:12:40.899 { 00:12:40.899 "name": "spare", 00:12:40.899 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:40.899 "is_configured": true, 00:12:40.899 "data_offset": 2048, 00:12:40.899 "data_size": 63488 00:12:40.899 }, 00:12:40.899 { 00:12:40.899 "name": "BaseBdev2", 00:12:40.899 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:40.899 "is_configured": true, 00:12:40.899 "data_offset": 2048, 00:12:40.899 "data_size": 63488 00:12:40.899 } 00:12:40.899 ] 00:12:40.899 }' 00:12:40.899 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.899 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.157 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.157 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.157 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.157 [2024-11-16 18:53:24.518299] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.157 [2024-11-16 18:53:24.518394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.157 00:12:41.157 Latency(us) 00:12:41.157 [2024-11-16T18:53:24.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:41.157 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:12:41.157 raid_bdev1 : 7.74 96.77 290.31 0.00 0.00 13928.29 316.59 137368.03 00:12:41.157 [2024-11-16T18:53:24.629Z] =================================================================================================================== 00:12:41.157 [2024-11-16T18:53:24.629Z] Total : 96.77 290.31 0.00 0.00 13928.29 316.59 
137368.03 00:12:41.157 [2024-11-16 18:53:24.615260] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.157 [2024-11-16 18:53:24.615358] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.157 [2024-11-16 18:53:24.615459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.157 [2024-11-16 18:53:24.615552] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:41.157 { 00:12:41.157 "results": [ 00:12:41.157 { 00:12:41.157 "job": "raid_bdev1", 00:12:41.157 "core_mask": "0x1", 00:12:41.157 "workload": "randrw", 00:12:41.157 "percentage": 50, 00:12:41.157 "status": "finished", 00:12:41.157 "queue_depth": 2, 00:12:41.157 "io_size": 3145728, 00:12:41.157 "runtime": 7.740077, 00:12:41.157 "iops": 96.76906315014696, 00:12:41.157 "mibps": 290.3071894504409, 00:12:41.157 "io_failed": 0, 00:12:41.157 "io_timeout": 0, 00:12:41.157 "avg_latency_us": 13928.291332256691, 00:12:41.157 "min_latency_us": 316.5903930131004, 00:12:41.157 "max_latency_us": 137368.03493449782 00:12:41.157 } 00:12:41.157 ], 00:12:41.157 "core_count": 1 00:12:41.157 } 00:12:41.158 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.158 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.158 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:12:41.158 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.158 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:41.417 18:53:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:12:41.417 /dev/nbd0 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:12:41.417 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.677 1+0 records in 00:12:41.677 1+0 records out 00:12:41.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332147 s, 12.3 MB/s 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.677 18:53:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:12:41.677 /dev/nbd1 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:41.677 18:53:25 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:41.677 1+0 records in 00:12:41.677 1+0 records out 00:12:41.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313989 s, 13.0 MB/s 00:12:41.677 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.935 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:12:41.936 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.194 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:42.455 18:53:25 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.455 [2024-11-16 18:53:25.790482] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:42.455 [2024-11-16 18:53:25.790545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:42.455 [2024-11-16 18:53:25.790569] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:12:42.455 [2024-11-16 
18:53:25.790578] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:42.455 [2024-11-16 18:53:25.792792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:42.455 [2024-11-16 18:53:25.792831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:42.455 [2024-11-16 18:53:25.792918] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:42.455 [2024-11-16 18:53:25.792965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:42.455 [2024-11-16 18:53:25.793094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.455 spare 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.455 [2024-11-16 18:53:25.892999] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:42.455 [2024-11-16 18:53:25.893041] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:42.455 [2024-11-16 18:53:25.893379] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:12:42.455 [2024-11-16 18:53:25.893569] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:42.455 [2024-11-16 18:53:25.893579] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:42.455 [2024-11-16 18:53:25.893830] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.455 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.714 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.714 "name": "raid_bdev1", 00:12:42.714 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:42.714 "strip_size_kb": 0, 00:12:42.714 "state": 
"online", 00:12:42.714 "raid_level": "raid1", 00:12:42.714 "superblock": true, 00:12:42.714 "num_base_bdevs": 2, 00:12:42.714 "num_base_bdevs_discovered": 2, 00:12:42.714 "num_base_bdevs_operational": 2, 00:12:42.714 "base_bdevs_list": [ 00:12:42.714 { 00:12:42.714 "name": "spare", 00:12:42.714 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:42.714 "is_configured": true, 00:12:42.714 "data_offset": 2048, 00:12:42.714 "data_size": 63488 00:12:42.714 }, 00:12:42.714 { 00:12:42.714 "name": "BaseBdev2", 00:12:42.714 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:42.714 "is_configured": true, 00:12:42.714 "data_offset": 2048, 00:12:42.714 "data_size": 63488 00:12:42.714 } 00:12:42.714 ] 00:12:42.714 }' 00:12:42.714 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.714 18:53:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.973 18:53:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:42.973 "name": "raid_bdev1", 00:12:42.973 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:42.973 "strip_size_kb": 0, 00:12:42.973 "state": "online", 00:12:42.973 "raid_level": "raid1", 00:12:42.973 "superblock": true, 00:12:42.973 "num_base_bdevs": 2, 00:12:42.973 "num_base_bdevs_discovered": 2, 00:12:42.973 "num_base_bdevs_operational": 2, 00:12:42.973 "base_bdevs_list": [ 00:12:42.973 { 00:12:42.973 "name": "spare", 00:12:42.973 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:42.973 "is_configured": true, 00:12:42.973 "data_offset": 2048, 00:12:42.973 "data_size": 63488 00:12:42.973 }, 00:12:42.973 { 00:12:42.973 "name": "BaseBdev2", 00:12:42.973 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:42.973 "is_configured": true, 00:12:42.973 "data_offset": 2048, 00:12:42.973 "data_size": 63488 00:12:42.973 } 00:12:42.973 ] 00:12:42.973 }' 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:42.973 18:53:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.973 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:42.973 [2024-11-16 18:53:26.441503] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.232 "name": "raid_bdev1", 00:12:43.232 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:43.232 "strip_size_kb": 0, 00:12:43.232 "state": "online", 00:12:43.232 "raid_level": "raid1", 00:12:43.232 "superblock": true, 00:12:43.232 "num_base_bdevs": 2, 00:12:43.232 "num_base_bdevs_discovered": 1, 00:12:43.232 "num_base_bdevs_operational": 1, 00:12:43.232 "base_bdevs_list": [ 00:12:43.232 { 00:12:43.232 "name": null, 00:12:43.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.232 "is_configured": false, 00:12:43.232 "data_offset": 0, 00:12:43.232 "data_size": 63488 00:12:43.232 }, 00:12:43.232 { 00:12:43.232 "name": "BaseBdev2", 00:12:43.232 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:43.232 "is_configured": true, 00:12:43.232 "data_offset": 2048, 00:12:43.232 "data_size": 63488 00:12:43.232 } 00:12:43.232 ] 00:12:43.232 }' 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.232 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.491 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:43.491 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.491 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:43.491 [2024-11-16 
18:53:26.952752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.491 [2024-11-16 18:53:26.953043] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:43.491 [2024-11-16 18:53:26.953116] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:12:43.491 [2024-11-16 18:53:26.953187] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:43.749 [2024-11-16 18:53:26.969222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:12:43.749 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.749 18:53:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:43.749 [2024-11-16 18:53:26.971211] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:44.684 18:53:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:44.684 18:53:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:44.684 18:53:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:44.684 18:53:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:44.684 18:53:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:44.684 18:53:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.684 18:53:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.684 18:53:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.684 18:53:27 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.684 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.684 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:44.684 "name": "raid_bdev1", 00:12:44.684 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:44.684 "strip_size_kb": 0, 00:12:44.684 "state": "online", 00:12:44.684 "raid_level": "raid1", 00:12:44.684 "superblock": true, 00:12:44.684 "num_base_bdevs": 2, 00:12:44.684 "num_base_bdevs_discovered": 2, 00:12:44.684 "num_base_bdevs_operational": 2, 00:12:44.684 "process": { 00:12:44.684 "type": "rebuild", 00:12:44.684 "target": "spare", 00:12:44.684 "progress": { 00:12:44.684 "blocks": 20480, 00:12:44.684 "percent": 32 00:12:44.684 } 00:12:44.684 }, 00:12:44.684 "base_bdevs_list": [ 00:12:44.684 { 00:12:44.684 "name": "spare", 00:12:44.684 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:44.684 "is_configured": true, 00:12:44.684 "data_offset": 2048, 00:12:44.684 "data_size": 63488 00:12:44.684 }, 00:12:44.684 { 00:12:44.684 "name": "BaseBdev2", 00:12:44.684 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:44.684 "is_configured": true, 00:12:44.684 "data_offset": 2048, 00:12:44.684 "data_size": 63488 00:12:44.684 } 00:12:44.684 ] 00:12:44.684 }' 00:12:44.684 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:44.684 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:44.684 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:44.684 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:44.684 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:12:44.684 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.684 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.684 [2024-11-16 18:53:28.118571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.942 [2024-11-16 18:53:28.176531] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:44.942 [2024-11-16 18:53:28.176614] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:44.942 [2024-11-16 18:53:28.176629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:44.942 [2024-11-16 18:53:28.176638] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:44.942 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.942 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:44.942 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:44.942 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:44.942 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:44.942 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.943 "name": "raid_bdev1", 00:12:44.943 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:44.943 "strip_size_kb": 0, 00:12:44.943 "state": "online", 00:12:44.943 "raid_level": "raid1", 00:12:44.943 "superblock": true, 00:12:44.943 "num_base_bdevs": 2, 00:12:44.943 "num_base_bdevs_discovered": 1, 00:12:44.943 "num_base_bdevs_operational": 1, 00:12:44.943 "base_bdevs_list": [ 00:12:44.943 { 00:12:44.943 "name": null, 00:12:44.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.943 "is_configured": false, 00:12:44.943 "data_offset": 0, 00:12:44.943 "data_size": 63488 00:12:44.943 }, 00:12:44.943 { 00:12:44.943 "name": "BaseBdev2", 00:12:44.943 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:44.943 "is_configured": true, 00:12:44.943 "data_offset": 2048, 00:12:44.943 "data_size": 63488 00:12:44.943 } 00:12:44.943 ] 00:12:44.943 }' 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.943 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.202 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.202 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:45.202 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:45.202 [2024-11-16 18:53:28.669402] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:45.202 [2024-11-16 18:53:28.669553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.202 [2024-11-16 18:53:28.669599] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:45.202 [2024-11-16 18:53:28.669634] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.202 [2024-11-16 18:53:28.670148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.202 [2024-11-16 18:53:28.670214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.202 [2024-11-16 18:53:28.670345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:45.202 [2024-11-16 18:53:28.670395] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:45.202 [2024-11-16 18:53:28.670438] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:45.202 [2024-11-16 18:53:28.670514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:45.461 [2024-11-16 18:53:28.686520] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:12:45.461 spare 00:12:45.461 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.461 [2024-11-16 18:53:28.688452] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:45.461 18:53:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:46.397 "name": "raid_bdev1", 00:12:46.397 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:46.397 "strip_size_kb": 0, 00:12:46.397 
"state": "online", 00:12:46.397 "raid_level": "raid1", 00:12:46.397 "superblock": true, 00:12:46.397 "num_base_bdevs": 2, 00:12:46.397 "num_base_bdevs_discovered": 2, 00:12:46.397 "num_base_bdevs_operational": 2, 00:12:46.397 "process": { 00:12:46.397 "type": "rebuild", 00:12:46.397 "target": "spare", 00:12:46.397 "progress": { 00:12:46.397 "blocks": 20480, 00:12:46.397 "percent": 32 00:12:46.397 } 00:12:46.397 }, 00:12:46.397 "base_bdevs_list": [ 00:12:46.397 { 00:12:46.397 "name": "spare", 00:12:46.397 "uuid": "651284cb-19f2-51a1-a04f-ed35aee8f664", 00:12:46.397 "is_configured": true, 00:12:46.397 "data_offset": 2048, 00:12:46.397 "data_size": 63488 00:12:46.397 }, 00:12:46.397 { 00:12:46.397 "name": "BaseBdev2", 00:12:46.397 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:46.397 "is_configured": true, 00:12:46.397 "data_offset": 2048, 00:12:46.397 "data_size": 63488 00:12:46.397 } 00:12:46.397 ] 00:12:46.397 }' 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.397 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.397 [2024-11-16 18:53:29.851796] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.657 [2024-11-16 18:53:29.893702] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:12:46.657 [2024-11-16 18:53:29.893831] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.657 [2024-11-16 18:53:29.893853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:46.657 [2024-11-16 18:53:29.893861] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.657 18:53:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.657 "name": "raid_bdev1", 00:12:46.657 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:46.657 "strip_size_kb": 0, 00:12:46.657 "state": "online", 00:12:46.657 "raid_level": "raid1", 00:12:46.657 "superblock": true, 00:12:46.657 "num_base_bdevs": 2, 00:12:46.657 "num_base_bdevs_discovered": 1, 00:12:46.657 "num_base_bdevs_operational": 1, 00:12:46.657 "base_bdevs_list": [ 00:12:46.657 { 00:12:46.657 "name": null, 00:12:46.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.657 "is_configured": false, 00:12:46.657 "data_offset": 0, 00:12:46.657 "data_size": 63488 00:12:46.657 }, 00:12:46.657 { 00:12:46.657 "name": "BaseBdev2", 00:12:46.657 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:46.657 "is_configured": true, 00:12:46.657 "data_offset": 2048, 00:12:46.657 "data_size": 63488 00:12:46.657 } 00:12:46.657 ] 00:12:46.657 }' 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.657 18:53:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.915 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:47.174 "name": "raid_bdev1", 00:12:47.174 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:47.174 "strip_size_kb": 0, 00:12:47.174 "state": "online", 00:12:47.174 "raid_level": "raid1", 00:12:47.174 "superblock": true, 00:12:47.174 "num_base_bdevs": 2, 00:12:47.174 "num_base_bdevs_discovered": 1, 00:12:47.174 "num_base_bdevs_operational": 1, 00:12:47.174 "base_bdevs_list": [ 00:12:47.174 { 00:12:47.174 "name": null, 00:12:47.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.174 "is_configured": false, 00:12:47.174 "data_offset": 0, 00:12:47.174 "data_size": 63488 00:12:47.174 }, 00:12:47.174 { 00:12:47.174 "name": "BaseBdev2", 00:12:47.174 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:47.174 "is_configured": true, 00:12:47.174 "data_offset": 2048, 00:12:47.174 "data_size": 63488 00:12:47.174 } 00:12:47.174 ] 00:12:47.174 }' 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:47.174 [2024-11-16 18:53:30.539974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:47.174 [2024-11-16 18:53:30.540032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.174 [2024-11-16 18:53:30.540054] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:47.174 [2024-11-16 18:53:30.540063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.174 [2024-11-16 18:53:30.540508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.174 [2024-11-16 18:53:30.540525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:47.174 [2024-11-16 18:53:30.540605] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:12:47.174 [2024-11-16 18:53:30.540621] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:47.174 [2024-11-16 18:53:30.540631] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:47.174 [2024-11-16 18:53:30.540667] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:12:47.174 BaseBdev1 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.174 18:53:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.113 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.372 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:48.372 "name": "raid_bdev1", 00:12:48.372 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:48.372 "strip_size_kb": 0, 00:12:48.372 "state": "online", 00:12:48.372 "raid_level": "raid1", 00:12:48.372 "superblock": true, 00:12:48.372 "num_base_bdevs": 2, 00:12:48.372 "num_base_bdevs_discovered": 1, 00:12:48.372 "num_base_bdevs_operational": 1, 00:12:48.372 "base_bdevs_list": [ 00:12:48.372 { 00:12:48.372 "name": null, 00:12:48.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.372 "is_configured": false, 00:12:48.372 "data_offset": 0, 00:12:48.372 "data_size": 63488 00:12:48.372 }, 00:12:48.372 { 00:12:48.372 "name": "BaseBdev2", 00:12:48.372 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:48.372 "is_configured": true, 00:12:48.372 "data_offset": 2048, 00:12:48.372 "data_size": 63488 00:12:48.372 } 00:12:48.372 ] 00:12:48.372 }' 00:12:48.372 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:48.372 18:53:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:48.631 "name": "raid_bdev1", 00:12:48.631 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:48.631 "strip_size_kb": 0, 00:12:48.631 "state": "online", 00:12:48.631 "raid_level": "raid1", 00:12:48.631 "superblock": true, 00:12:48.631 "num_base_bdevs": 2, 00:12:48.631 "num_base_bdevs_discovered": 1, 00:12:48.631 "num_base_bdevs_operational": 1, 00:12:48.631 "base_bdevs_list": [ 00:12:48.631 { 00:12:48.631 "name": null, 00:12:48.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.631 "is_configured": false, 00:12:48.631 "data_offset": 0, 00:12:48.631 "data_size": 63488 00:12:48.631 }, 00:12:48.631 { 00:12:48.631 "name": "BaseBdev2", 00:12:48.631 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:48.631 "is_configured": true, 00:12:48.631 "data_offset": 2048, 00:12:48.631 "data_size": 63488 00:12:48.631 } 00:12:48.631 ] 00:12:48.631 }' 00:12:48.631 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:48.890 [2024-11-16 18:53:32.169397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.890 [2024-11-16 18:53:32.169626] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:12:48.890 [2024-11-16 18:53:32.169701] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:12:48.890 request: 00:12:48.890 { 00:12:48.890 "base_bdev": "BaseBdev1", 00:12:48.890 "raid_bdev": "raid_bdev1", 00:12:48.890 "method": "bdev_raid_add_base_bdev", 00:12:48.890 "req_id": 1 00:12:48.890 } 00:12:48.890 Got JSON-RPC error response 00:12:48.890 response: 00:12:48.890 { 00:12:48.890 "code": -22, 00:12:48.890 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:12:48.890 } 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:48.890 18:53:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:49.833 "name": "raid_bdev1", 00:12:49.833 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:49.833 "strip_size_kb": 0, 00:12:49.833 "state": "online", 00:12:49.833 "raid_level": "raid1", 00:12:49.833 "superblock": true, 00:12:49.833 "num_base_bdevs": 2, 00:12:49.833 "num_base_bdevs_discovered": 1, 00:12:49.833 "num_base_bdevs_operational": 1, 00:12:49.833 "base_bdevs_list": [ 00:12:49.833 { 00:12:49.833 "name": null, 00:12:49.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.833 "is_configured": false, 00:12:49.833 "data_offset": 0, 00:12:49.833 "data_size": 63488 00:12:49.833 }, 00:12:49.833 { 00:12:49.833 "name": "BaseBdev2", 00:12:49.833 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:49.833 "is_configured": true, 00:12:49.833 "data_offset": 2048, 00:12:49.833 "data_size": 63488 00:12:49.833 } 00:12:49.833 ] 00:12:49.833 }' 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:49.833 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.401 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:50.401 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:50.401 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:50.401 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:50.401 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:50.401 18:53:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.401 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:50.402 "name": "raid_bdev1", 00:12:50.402 "uuid": "01371d65-0da4-4b63-9ccf-24f11fd1185e", 00:12:50.402 "strip_size_kb": 0, 00:12:50.402 "state": "online", 00:12:50.402 "raid_level": "raid1", 00:12:50.402 "superblock": true, 00:12:50.402 "num_base_bdevs": 2, 00:12:50.402 "num_base_bdevs_discovered": 1, 00:12:50.402 "num_base_bdevs_operational": 1, 00:12:50.402 "base_bdevs_list": [ 00:12:50.402 { 00:12:50.402 "name": null, 00:12:50.402 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.402 "is_configured": false, 00:12:50.402 "data_offset": 0, 00:12:50.402 "data_size": 63488 00:12:50.402 }, 00:12:50.402 { 00:12:50.402 "name": "BaseBdev2", 00:12:50.402 "uuid": "0d50d3ed-c039-5952-8a1e-0f6abd7e42df", 00:12:50.402 "is_configured": true, 00:12:50.402 "data_offset": 2048, 00:12:50.402 "data_size": 63488 00:12:50.402 } 00:12:50.402 ] 00:12:50.402 }' 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:50.402 18:53:33 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76589 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76589 ']' 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76589 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76589 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:50.402 killing process with pid 76589 00:12:50.402 Received shutdown signal, test time was about 16.966379 seconds 00:12:50.402 00:12:50.402 Latency(us) 00:12:50.402 [2024-11-16T18:53:33.874Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:50.402 [2024-11-16T18:53:33.874Z] =================================================================================================================== 00:12:50.402 [2024-11-16T18:53:33.874Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76589' 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76589 00:12:50.402 [2024-11-16 18:53:33.802642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:50.402 [2024-11-16 18:53:33.802791] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.402 18:53:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76589 00:12:50.402 [2024-11-16 18:53:33.802861] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.402 [2024-11-16 18:53:33.802876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:12:50.661 [2024-11-16 18:53:34.023912] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:12:52.038 00:12:52.038 real 0m19.972s 00:12:52.038 user 0m25.977s 00:12:52.038 sys 0m2.127s 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:52.038 ************************************ 00:12:52.038 END TEST raid_rebuild_test_sb_io 00:12:52.038 ************************************ 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:12:52.038 18:53:35 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:52.038 18:53:35 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:12:52.038 18:53:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:52.038 18:53:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:52.038 18:53:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:52.038 ************************************ 00:12:52.038 START TEST raid_rebuild_test 00:12:52.038 ************************************ 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:52.038 18:53:35 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77282 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77282 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77282 ']' 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:52.038 18:53:35 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.038 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:52.038 Zero copy mechanism will not be used. 
00:12:52.038 [2024-11-16 18:53:35.332346] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:52.038 [2024-11-16 18:53:35.332556] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77282 ] 00:12:52.038 [2024-11-16 18:53:35.505361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.297 [2024-11-16 18:53:35.618204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.555 [2024-11-16 18:53:35.807302] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.555 [2024-11-16 18:53:35.807397] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.814 BaseBdev1_malloc 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.814 
[2024-11-16 18:53:36.203483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:52.814 [2024-11-16 18:53:36.203567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.814 [2024-11-16 18:53:36.203591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:52.814 [2024-11-16 18:53:36.203602] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.814 [2024-11-16 18:53:36.205801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.814 [2024-11-16 18:53:36.205920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:52.814 BaseBdev1 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.814 BaseBdev2_malloc 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.814 [2024-11-16 18:53:36.257903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:52.814 [2024-11-16 18:53:36.257987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:52.814 [2024-11-16 18:53:36.258006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:52.814 [2024-11-16 18:53:36.258019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.814 [2024-11-16 18:53:36.260034] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.814 [2024-11-16 18:53:36.260171] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:52.814 BaseBdev2 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.814 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.073 BaseBdev3_malloc 00:12:53.073 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.073 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:12:53.073 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.073 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.073 [2024-11-16 18:53:36.322136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:12:53.073 [2024-11-16 18:53:36.322195] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.074 [2024-11-16 18:53:36.322233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:53.074 [2024-11-16 18:53:36.322244] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.074 [2024-11-16 18:53:36.324276] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.074 [2024-11-16 18:53:36.324322] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:53.074 BaseBdev3 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.074 BaseBdev4_malloc 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.074 [2024-11-16 18:53:36.375785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:12:53.074 [2024-11-16 18:53:36.375843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.074 [2024-11-16 18:53:36.375868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:53.074 [2024-11-16 18:53:36.375878] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.074 [2024-11-16 18:53:36.377858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.074 [2024-11-16 18:53:36.377968] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:53.074 BaseBdev4 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.074 spare_malloc 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.074 spare_delay 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.074 [2024-11-16 18:53:36.440251] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:53.074 [2024-11-16 18:53:36.440390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.074 [2024-11-16 18:53:36.440413] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:53.074 [2024-11-16 18:53:36.440424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.074 [2024-11-16 
18:53:36.442455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.074 [2024-11-16 18:53:36.442493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:53.074 spare 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.074 [2024-11-16 18:53:36.452275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.074 [2024-11-16 18:53:36.454016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.074 [2024-11-16 18:53:36.454082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.074 [2024-11-16 18:53:36.454132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:53.074 [2024-11-16 18:53:36.454207] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:53.074 [2024-11-16 18:53:36.454220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:53.074 [2024-11-16 18:53:36.454467] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:53.074 [2024-11-16 18:53:36.454637] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:53.074 [2024-11-16 18:53:36.454649] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:53.074 [2024-11-16 18:53:36.454833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.074 "name": "raid_bdev1", 00:12:53.074 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:12:53.074 "strip_size_kb": 0, 00:12:53.074 "state": "online", 00:12:53.074 "raid_level": 
"raid1", 00:12:53.074 "superblock": false, 00:12:53.074 "num_base_bdevs": 4, 00:12:53.074 "num_base_bdevs_discovered": 4, 00:12:53.074 "num_base_bdevs_operational": 4, 00:12:53.074 "base_bdevs_list": [ 00:12:53.074 { 00:12:53.074 "name": "BaseBdev1", 00:12:53.074 "uuid": "6105ccc0-1802-500e-9133-1fd21448dead", 00:12:53.074 "is_configured": true, 00:12:53.074 "data_offset": 0, 00:12:53.074 "data_size": 65536 00:12:53.074 }, 00:12:53.074 { 00:12:53.074 "name": "BaseBdev2", 00:12:53.074 "uuid": "a1e2501e-dd83-5a4c-915d-b3f9863be3aa", 00:12:53.074 "is_configured": true, 00:12:53.074 "data_offset": 0, 00:12:53.074 "data_size": 65536 00:12:53.074 }, 00:12:53.074 { 00:12:53.074 "name": "BaseBdev3", 00:12:53.074 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:12:53.074 "is_configured": true, 00:12:53.074 "data_offset": 0, 00:12:53.074 "data_size": 65536 00:12:53.074 }, 00:12:53.074 { 00:12:53.074 "name": "BaseBdev4", 00:12:53.074 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:12:53.074 "is_configured": true, 00:12:53.074 "data_offset": 0, 00:12:53.074 "data_size": 65536 00:12:53.074 } 00:12:53.074 ] 00:12:53.074 }' 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.074 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.641 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:53.641 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:53.641 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.641 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.641 [2024-11-16 18:53:36.875976] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.641 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.641 18:53:36 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:53.641 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:53.642 18:53:36 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:53.901 [2024-11-16 18:53:37.151210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:53.901 /dev/nbd0 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:53.901 1+0 records in 00:12:53.901 1+0 records out 00:12:53.901 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347365 s, 11.8 MB/s 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:53.901 18:53:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:59.207 65536+0 records in 00:12:59.207 65536+0 records out 00:12:59.207 33554432 bytes (34 MB, 32 MiB) copied, 5.08295 s, 6.6 MB/s 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:59.207 18:53:42 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:59.207 [2024-11-16 18:53:42.510017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.207 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.208 [2024-11-16 18:53:42.518105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.208 18:53:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.208 "name": "raid_bdev1", 00:12:59.208 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:12:59.208 "strip_size_kb": 0, 00:12:59.208 "state": "online", 00:12:59.208 "raid_level": "raid1", 00:12:59.208 "superblock": false, 00:12:59.208 "num_base_bdevs": 4, 00:12:59.208 "num_base_bdevs_discovered": 3, 00:12:59.208 "num_base_bdevs_operational": 3, 00:12:59.208 "base_bdevs_list": [ 00:12:59.208 { 00:12:59.208 "name": null, 00:12:59.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.208 "is_configured": false, 00:12:59.208 "data_offset": 0, 00:12:59.208 "data_size": 65536 00:12:59.208 }, 00:12:59.208 { 00:12:59.208 "name": "BaseBdev2", 00:12:59.208 "uuid": "a1e2501e-dd83-5a4c-915d-b3f9863be3aa", 00:12:59.208 "is_configured": true, 00:12:59.208 "data_offset": 0, 00:12:59.208 "data_size": 65536 00:12:59.208 }, 00:12:59.208 { 00:12:59.208 "name": "BaseBdev3", 00:12:59.208 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:12:59.208 "is_configured": true, 00:12:59.208 "data_offset": 0, 00:12:59.208 "data_size": 65536 00:12:59.208 }, 00:12:59.208 { 00:12:59.208 "name": "BaseBdev4", 00:12:59.208 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:12:59.208 
"is_configured": true, 00:12:59.208 "data_offset": 0, 00:12:59.208 "data_size": 65536 00:12:59.208 } 00:12:59.208 ] 00:12:59.208 }' 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.208 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.775 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.775 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.775 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.775 [2024-11-16 18:53:42.949343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.775 [2024-11-16 18:53:42.963328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:12:59.775 18:53:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.775 18:53:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:59.775 [2024-11-16 18:53:42.965210] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.711 18:53:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.711 18:53:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.711 18:53:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.711 18:53:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.711 18:53:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.711 18:53:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.711 18:53:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.711 
18:53:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.711 18:53:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.711 18:53:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.711 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.711 "name": "raid_bdev1", 00:13:00.711 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:00.711 "strip_size_kb": 0, 00:13:00.711 "state": "online", 00:13:00.711 "raid_level": "raid1", 00:13:00.711 "superblock": false, 00:13:00.711 "num_base_bdevs": 4, 00:13:00.711 "num_base_bdevs_discovered": 4, 00:13:00.711 "num_base_bdevs_operational": 4, 00:13:00.711 "process": { 00:13:00.711 "type": "rebuild", 00:13:00.711 "target": "spare", 00:13:00.711 "progress": { 00:13:00.711 "blocks": 20480, 00:13:00.711 "percent": 31 00:13:00.711 } 00:13:00.711 }, 00:13:00.711 "base_bdevs_list": [ 00:13:00.711 { 00:13:00.711 "name": "spare", 00:13:00.711 "uuid": "3eac07fb-a4af-5184-a566-d1fa086c3565", 00:13:00.711 "is_configured": true, 00:13:00.711 "data_offset": 0, 00:13:00.711 "data_size": 65536 00:13:00.711 }, 00:13:00.711 { 00:13:00.711 "name": "BaseBdev2", 00:13:00.711 "uuid": "a1e2501e-dd83-5a4c-915d-b3f9863be3aa", 00:13:00.711 "is_configured": true, 00:13:00.711 "data_offset": 0, 00:13:00.711 "data_size": 65536 00:13:00.711 }, 00:13:00.711 { 00:13:00.711 "name": "BaseBdev3", 00:13:00.711 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:00.711 "is_configured": true, 00:13:00.711 "data_offset": 0, 00:13:00.711 "data_size": 65536 00:13:00.711 }, 00:13:00.711 { 00:13:00.711 "name": "BaseBdev4", 00:13:00.711 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:00.711 "is_configured": true, 00:13:00.711 "data_offset": 0, 00:13:00.711 "data_size": 65536 00:13:00.711 } 00:13:00.711 ] 00:13:00.711 }' 00:13:00.711 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:00.711 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.711 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.711 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.711 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:00.711 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.711 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.711 [2024-11-16 18:53:44.080394] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.711 [2024-11-16 18:53:44.170293] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:00.711 [2024-11-16 18:53:44.170375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.711 [2024-11-16 18:53:44.170391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.711 [2024-11-16 18:53:44.170400] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.970 18:53:44 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.970 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.970 "name": "raid_bdev1", 00:13:00.970 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:00.970 "strip_size_kb": 0, 00:13:00.970 "state": "online", 00:13:00.970 "raid_level": "raid1", 00:13:00.970 "superblock": false, 00:13:00.970 "num_base_bdevs": 4, 00:13:00.970 "num_base_bdevs_discovered": 3, 00:13:00.970 "num_base_bdevs_operational": 3, 00:13:00.970 "base_bdevs_list": [ 00:13:00.970 { 00:13:00.970 "name": null, 00:13:00.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.971 "is_configured": false, 00:13:00.971 "data_offset": 0, 00:13:00.971 "data_size": 65536 00:13:00.971 }, 00:13:00.971 { 00:13:00.971 "name": "BaseBdev2", 00:13:00.971 "uuid": "a1e2501e-dd83-5a4c-915d-b3f9863be3aa", 00:13:00.971 "is_configured": true, 00:13:00.971 "data_offset": 0, 00:13:00.971 "data_size": 65536 00:13:00.971 }, 00:13:00.971 { 00:13:00.971 "name": 
"BaseBdev3", 00:13:00.971 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:00.971 "is_configured": true, 00:13:00.971 "data_offset": 0, 00:13:00.971 "data_size": 65536 00:13:00.971 }, 00:13:00.971 { 00:13:00.971 "name": "BaseBdev4", 00:13:00.971 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:00.971 "is_configured": true, 00:13:00.971 "data_offset": 0, 00:13:00.971 "data_size": 65536 00:13:00.971 } 00:13:00.971 ] 00:13:00.971 }' 00:13:00.971 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.971 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.229 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:01.229 "name": "raid_bdev1", 00:13:01.229 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:01.229 "strip_size_kb": 0, 00:13:01.229 "state": "online", 00:13:01.229 "raid_level": 
"raid1", 00:13:01.229 "superblock": false, 00:13:01.229 "num_base_bdevs": 4, 00:13:01.229 "num_base_bdevs_discovered": 3, 00:13:01.229 "num_base_bdevs_operational": 3, 00:13:01.229 "base_bdevs_list": [ 00:13:01.229 { 00:13:01.229 "name": null, 00:13:01.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.229 "is_configured": false, 00:13:01.229 "data_offset": 0, 00:13:01.229 "data_size": 65536 00:13:01.229 }, 00:13:01.229 { 00:13:01.229 "name": "BaseBdev2", 00:13:01.229 "uuid": "a1e2501e-dd83-5a4c-915d-b3f9863be3aa", 00:13:01.229 "is_configured": true, 00:13:01.229 "data_offset": 0, 00:13:01.229 "data_size": 65536 00:13:01.229 }, 00:13:01.230 { 00:13:01.230 "name": "BaseBdev3", 00:13:01.230 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:01.230 "is_configured": true, 00:13:01.230 "data_offset": 0, 00:13:01.230 "data_size": 65536 00:13:01.230 }, 00:13:01.230 { 00:13:01.230 "name": "BaseBdev4", 00:13:01.230 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:01.230 "is_configured": true, 00:13:01.230 "data_offset": 0, 00:13:01.230 "data_size": 65536 00:13:01.230 } 00:13:01.230 ] 00:13:01.230 }' 00:13:01.230 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:01.230 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:01.230 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:01.488 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:01.488 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:01.488 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.488 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.488 [2024-11-16 18:53:44.734445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:01.488 [2024-11-16 18:53:44.748841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:01.488 18:53:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.488 18:53:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:01.488 [2024-11-16 18:53:44.750699] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.451 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.452 "name": "raid_bdev1", 00:13:02.452 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:02.452 "strip_size_kb": 0, 00:13:02.452 "state": "online", 00:13:02.452 "raid_level": "raid1", 00:13:02.452 "superblock": false, 00:13:02.452 "num_base_bdevs": 4, 00:13:02.452 "num_base_bdevs_discovered": 4, 00:13:02.452 "num_base_bdevs_operational": 4, 
00:13:02.452 "process": { 00:13:02.452 "type": "rebuild", 00:13:02.452 "target": "spare", 00:13:02.452 "progress": { 00:13:02.452 "blocks": 20480, 00:13:02.452 "percent": 31 00:13:02.452 } 00:13:02.452 }, 00:13:02.452 "base_bdevs_list": [ 00:13:02.452 { 00:13:02.452 "name": "spare", 00:13:02.452 "uuid": "3eac07fb-a4af-5184-a566-d1fa086c3565", 00:13:02.452 "is_configured": true, 00:13:02.452 "data_offset": 0, 00:13:02.452 "data_size": 65536 00:13:02.452 }, 00:13:02.452 { 00:13:02.452 "name": "BaseBdev2", 00:13:02.452 "uuid": "a1e2501e-dd83-5a4c-915d-b3f9863be3aa", 00:13:02.452 "is_configured": true, 00:13:02.452 "data_offset": 0, 00:13:02.452 "data_size": 65536 00:13:02.452 }, 00:13:02.452 { 00:13:02.452 "name": "BaseBdev3", 00:13:02.452 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:02.452 "is_configured": true, 00:13:02.452 "data_offset": 0, 00:13:02.452 "data_size": 65536 00:13:02.452 }, 00:13:02.452 { 00:13:02.452 "name": "BaseBdev4", 00:13:02.452 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:02.452 "is_configured": true, 00:13:02.452 "data_offset": 0, 00:13:02.452 "data_size": 65536 00:13:02.452 } 00:13:02.452 ] 00:13:02.452 }' 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.452 18:53:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.452 [2024-11-16 18:53:45.909983] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:02.711 [2024-11-16 18:53:45.955534] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.711 18:53:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.711 "name": "raid_bdev1", 00:13:02.711 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:02.711 "strip_size_kb": 0, 00:13:02.711 "state": "online", 00:13:02.711 "raid_level": "raid1", 00:13:02.711 "superblock": false, 00:13:02.711 "num_base_bdevs": 4, 00:13:02.711 "num_base_bdevs_discovered": 3, 00:13:02.711 "num_base_bdevs_operational": 3, 00:13:02.711 "process": { 00:13:02.711 "type": "rebuild", 00:13:02.711 "target": "spare", 00:13:02.711 "progress": { 00:13:02.711 "blocks": 24576, 00:13:02.711 "percent": 37 00:13:02.711 } 00:13:02.711 }, 00:13:02.711 "base_bdevs_list": [ 00:13:02.711 { 00:13:02.711 "name": "spare", 00:13:02.711 "uuid": "3eac07fb-a4af-5184-a566-d1fa086c3565", 00:13:02.711 "is_configured": true, 00:13:02.711 "data_offset": 0, 00:13:02.711 "data_size": 65536 00:13:02.711 }, 00:13:02.711 { 00:13:02.711 "name": null, 00:13:02.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.711 "is_configured": false, 00:13:02.711 "data_offset": 0, 00:13:02.711 "data_size": 65536 00:13:02.711 }, 00:13:02.711 { 00:13:02.711 "name": "BaseBdev3", 00:13:02.711 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:02.711 "is_configured": true, 00:13:02.711 "data_offset": 0, 00:13:02.711 "data_size": 65536 00:13:02.711 }, 00:13:02.711 { 00:13:02.711 "name": "BaseBdev4", 00:13:02.711 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:02.711 "is_configured": true, 00:13:02.711 "data_offset": 0, 00:13:02.711 "data_size": 65536 00:13:02.711 } 00:13:02.711 ] 00:13:02.711 }' 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.711 18:53:46 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=428 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.711 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.711 "name": "raid_bdev1", 00:13:02.711 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:02.711 "strip_size_kb": 0, 00:13:02.711 "state": "online", 00:13:02.711 "raid_level": "raid1", 00:13:02.711 "superblock": false, 00:13:02.711 "num_base_bdevs": 4, 00:13:02.711 "num_base_bdevs_discovered": 3, 00:13:02.711 "num_base_bdevs_operational": 3, 00:13:02.711 "process": { 00:13:02.711 "type": "rebuild", 00:13:02.711 "target": "spare", 00:13:02.711 "progress": { 00:13:02.711 "blocks": 26624, 00:13:02.711 "percent": 40 
00:13:02.711 } 00:13:02.711 }, 00:13:02.711 "base_bdevs_list": [ 00:13:02.712 { 00:13:02.712 "name": "spare", 00:13:02.712 "uuid": "3eac07fb-a4af-5184-a566-d1fa086c3565", 00:13:02.712 "is_configured": true, 00:13:02.712 "data_offset": 0, 00:13:02.712 "data_size": 65536 00:13:02.712 }, 00:13:02.712 { 00:13:02.712 "name": null, 00:13:02.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.712 "is_configured": false, 00:13:02.712 "data_offset": 0, 00:13:02.712 "data_size": 65536 00:13:02.712 }, 00:13:02.712 { 00:13:02.712 "name": "BaseBdev3", 00:13:02.712 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:02.712 "is_configured": true, 00:13:02.712 "data_offset": 0, 00:13:02.712 "data_size": 65536 00:13:02.712 }, 00:13:02.712 { 00:13:02.712 "name": "BaseBdev4", 00:13:02.712 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:02.712 "is_configured": true, 00:13:02.712 "data_offset": 0, 00:13:02.712 "data_size": 65536 00:13:02.712 } 00:13:02.712 ] 00:13:02.712 }' 00:13:02.712 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.712 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.712 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.971 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.971 18:53:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:03.907 18:53:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:03.907 "name": "raid_bdev1", 00:13:03.907 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:03.907 "strip_size_kb": 0, 00:13:03.907 "state": "online", 00:13:03.907 "raid_level": "raid1", 00:13:03.907 "superblock": false, 00:13:03.907 "num_base_bdevs": 4, 00:13:03.907 "num_base_bdevs_discovered": 3, 00:13:03.907 "num_base_bdevs_operational": 3, 00:13:03.907 "process": { 00:13:03.907 "type": "rebuild", 00:13:03.907 "target": "spare", 00:13:03.907 "progress": { 00:13:03.907 "blocks": 49152, 00:13:03.907 "percent": 75 00:13:03.907 } 00:13:03.907 }, 00:13:03.907 "base_bdevs_list": [ 00:13:03.907 { 00:13:03.907 "name": "spare", 00:13:03.907 "uuid": "3eac07fb-a4af-5184-a566-d1fa086c3565", 00:13:03.907 "is_configured": true, 00:13:03.907 "data_offset": 0, 00:13:03.907 "data_size": 65536 00:13:03.907 }, 00:13:03.907 { 00:13:03.907 "name": null, 00:13:03.907 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.907 "is_configured": false, 00:13:03.907 "data_offset": 0, 00:13:03.907 "data_size": 65536 00:13:03.907 }, 00:13:03.907 { 00:13:03.907 "name": "BaseBdev3", 00:13:03.907 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:03.907 "is_configured": true, 
00:13:03.907 "data_offset": 0, 00:13:03.907 "data_size": 65536 00:13:03.907 }, 00:13:03.907 { 00:13:03.907 "name": "BaseBdev4", 00:13:03.907 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:03.907 "is_configured": true, 00:13:03.907 "data_offset": 0, 00:13:03.907 "data_size": 65536 00:13:03.907 } 00:13:03.907 ] 00:13:03.907 }' 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:03.907 18:53:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:04.843 [2024-11-16 18:53:47.963238] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:04.843 [2024-11-16 18:53:47.963395] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:04.843 [2024-11-16 18:53:47.963461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.102 "name": "raid_bdev1", 00:13:05.102 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:05.102 "strip_size_kb": 0, 00:13:05.102 "state": "online", 00:13:05.102 "raid_level": "raid1", 00:13:05.102 "superblock": false, 00:13:05.102 "num_base_bdevs": 4, 00:13:05.102 "num_base_bdevs_discovered": 3, 00:13:05.102 "num_base_bdevs_operational": 3, 00:13:05.102 "base_bdevs_list": [ 00:13:05.102 { 00:13:05.102 "name": "spare", 00:13:05.102 "uuid": "3eac07fb-a4af-5184-a566-d1fa086c3565", 00:13:05.102 "is_configured": true, 00:13:05.102 "data_offset": 0, 00:13:05.102 "data_size": 65536 00:13:05.102 }, 00:13:05.102 { 00:13:05.102 "name": null, 00:13:05.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.102 "is_configured": false, 00:13:05.102 "data_offset": 0, 00:13:05.102 "data_size": 65536 00:13:05.102 }, 00:13:05.102 { 00:13:05.102 "name": "BaseBdev3", 00:13:05.102 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:05.102 "is_configured": true, 00:13:05.102 "data_offset": 0, 00:13:05.102 "data_size": 65536 00:13:05.102 }, 00:13:05.102 { 00:13:05.102 "name": "BaseBdev4", 00:13:05.102 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:05.102 "is_configured": true, 00:13:05.102 "data_offset": 0, 00:13:05.102 "data_size": 65536 00:13:05.102 } 00:13:05.102 ] 00:13:05.102 }' 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.102 18:53:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:05.102 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:05.103 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:05.103 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.103 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.103 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.103 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.103 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.103 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:05.103 "name": "raid_bdev1", 00:13:05.103 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:05.103 "strip_size_kb": 0, 00:13:05.103 "state": "online", 00:13:05.103 "raid_level": "raid1", 00:13:05.103 "superblock": false, 00:13:05.103 "num_base_bdevs": 4, 00:13:05.103 "num_base_bdevs_discovered": 3, 00:13:05.103 "num_base_bdevs_operational": 3, 00:13:05.103 "base_bdevs_list": [ 00:13:05.103 { 00:13:05.103 "name": "spare", 
00:13:05.103 "uuid": "3eac07fb-a4af-5184-a566-d1fa086c3565", 00:13:05.103 "is_configured": true, 00:13:05.103 "data_offset": 0, 00:13:05.103 "data_size": 65536 00:13:05.103 }, 00:13:05.103 { 00:13:05.103 "name": null, 00:13:05.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.103 "is_configured": false, 00:13:05.103 "data_offset": 0, 00:13:05.103 "data_size": 65536 00:13:05.103 }, 00:13:05.103 { 00:13:05.103 "name": "BaseBdev3", 00:13:05.103 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:05.103 "is_configured": true, 00:13:05.103 "data_offset": 0, 00:13:05.103 "data_size": 65536 00:13:05.103 }, 00:13:05.103 { 00:13:05.103 "name": "BaseBdev4", 00:13:05.103 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:05.103 "is_configured": true, 00:13:05.103 "data_offset": 0, 00:13:05.103 "data_size": 65536 00:13:05.103 } 00:13:05.103 ] 00:13:05.103 }' 00:13:05.103 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:05.362 18:53:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.362 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.362 "name": "raid_bdev1", 00:13:05.362 "uuid": "eb7f23ad-b6e5-4d90-b244-6e3e3d04555f", 00:13:05.362 "strip_size_kb": 0, 00:13:05.362 "state": "online", 00:13:05.362 "raid_level": "raid1", 00:13:05.362 "superblock": false, 00:13:05.362 "num_base_bdevs": 4, 00:13:05.362 "num_base_bdevs_discovered": 3, 00:13:05.362 "num_base_bdevs_operational": 3, 00:13:05.362 "base_bdevs_list": [ 00:13:05.362 { 00:13:05.362 "name": "spare", 00:13:05.362 "uuid": "3eac07fb-a4af-5184-a566-d1fa086c3565", 00:13:05.362 "is_configured": true, 00:13:05.362 "data_offset": 0, 00:13:05.362 "data_size": 65536 00:13:05.362 }, 00:13:05.362 { 00:13:05.362 "name": null, 00:13:05.363 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.363 "is_configured": false, 00:13:05.363 "data_offset": 0, 00:13:05.363 "data_size": 65536 00:13:05.363 }, 00:13:05.363 { 00:13:05.363 "name": "BaseBdev3", 00:13:05.363 "uuid": "34af692d-3721-561a-9012-4e9ca8d794f9", 00:13:05.363 "is_configured": true, 
00:13:05.363 "data_offset": 0, 00:13:05.363 "data_size": 65536 00:13:05.363 }, 00:13:05.363 { 00:13:05.363 "name": "BaseBdev4", 00:13:05.363 "uuid": "a1dd3b37-b5d3-54af-9839-10d10ea96513", 00:13:05.363 "is_configured": true, 00:13:05.363 "data_offset": 0, 00:13:05.363 "data_size": 65536 00:13:05.363 } 00:13:05.363 ] 00:13:05.363 }' 00:13:05.363 18:53:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.363 18:53:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.622 [2024-11-16 18:53:49.056004] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.622 [2024-11-16 18:53:49.056035] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.622 [2024-11-16 18:53:49.056118] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.622 [2024-11-16 18:53:49.056198] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.622 [2024-11-16 18:53:49.056208] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:05.622 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:05.880 /dev/nbd0 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:05.880 18:53:49 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:05.880 1+0 records in 00:13:05.880 1+0 records out 00:13:05.880 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000313169 s, 13.1 MB/s 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:05.880 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:06.138 /dev/nbd1 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:06.138 
18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:06.138 1+0 records in 00:13:06.138 1+0 records out 00:13:06.138 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374147 s, 10.9 MB/s 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.138 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:06.139 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:06.139 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:06.139 18:53:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:06.139 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:06.139 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:13:06.139 18:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:06.397 18:53:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:06.397 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:06.397 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:06.397 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:06.397 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:06.397 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.397 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:06.656 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:06.656 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:06.656 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:06.656 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.656 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.656 18:53:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:06.656 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:06.656 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.656 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:06.656 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:06.915 
18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77282 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77282 ']' 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77282 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77282 00:13:06.915 killing process with pid 77282 00:13:06.915 Received shutdown signal, test time was about 60.000000 seconds 00:13:06.915 00:13:06.915 Latency(us) 00:13:06.915 [2024-11-16T18:53:50.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.915 [2024-11-16T18:53:50.387Z] =================================================================================================================== 00:13:06.915 [2024-11-16T18:53:50.387Z] Total : 0.00 0.00 0.00 0.00 0.00 
18446744073709551616.00 0.00 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77282' 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77282 00:13:06.915 [2024-11-16 18:53:50.250531] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.915 18:53:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77282 00:13:07.483 [2024-11-16 18:53:50.737476] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:08.421 18:53:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:08.421 00:13:08.421 real 0m16.601s 00:13:08.421 user 0m18.595s 00:13:08.421 sys 0m2.880s 00:13:08.421 ************************************ 00:13:08.421 END TEST raid_rebuild_test 00:13:08.421 ************************************ 00:13:08.421 18:53:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.421 18:53:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.421 18:53:51 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:08.421 18:53:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:08.421 18:53:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.421 18:53:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 ************************************ 00:13:08.680 START TEST raid_rebuild_test_sb 00:13:08.680 ************************************ 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:08.680 18:53:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77717 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77717 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77717 ']' 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:08.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:08.680 18:53:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:08.680 [2024-11-16 18:53:52.016385] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:08.680 [2024-11-16 18:53:52.016615] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77717 ] 00:13:08.680 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:08.681 Zero copy mechanism will not be used. 00:13:08.939 [2024-11-16 18:53:52.201594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:08.939 [2024-11-16 18:53:52.319024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.197 [2024-11-16 18:53:52.525800] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.197 [2024-11-16 18:53:52.525887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.456 
BaseBdev1_malloc 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.456 [2024-11-16 18:53:52.915007] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:09.456 [2024-11-16 18:53:52.915139] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.456 [2024-11-16 18:53:52.915194] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:09.456 [2024-11-16 18:53:52.915229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.456 [2024-11-16 18:53:52.917339] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.456 [2024-11-16 18:53:52.917417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:09.456 BaseBdev1 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.456 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 BaseBdev2_malloc 00:13:09.716 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:52 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:09.716 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.716 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 [2024-11-16 18:53:52.969974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:09.716 [2024-11-16 18:53:52.970032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.716 [2024-11-16 18:53:52.970052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:09.716 [2024-11-16 18:53:52.970064] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.716 [2024-11-16 18:53:52.972136] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.716 [2024-11-16 18:53:52.972176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:09.716 BaseBdev2 00:13:09.716 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.716 18:53:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:09.716 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.716 18:53:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 BaseBdev3_malloc 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 [2024-11-16 18:53:53.036062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:09.716 [2024-11-16 18:53:53.036185] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.716 [2024-11-16 18:53:53.036210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:09.716 [2024-11-16 18:53:53.036221] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.716 [2024-11-16 18:53:53.038258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.716 [2024-11-16 18:53:53.038300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:09.716 BaseBdev3 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 BaseBdev4_malloc 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 [2024-11-16 18:53:53.089708] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:13:09.716 [2024-11-16 18:53:53.089761] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.716 [2024-11-16 18:53:53.089779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:09.716 [2024-11-16 18:53:53.089790] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.716 [2024-11-16 18:53:53.091821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.716 [2024-11-16 18:53:53.091859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:09.716 BaseBdev4 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 spare_malloc 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 spare_delay 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.716 18:53:53 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 [2024-11-16 18:53:53.156982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:09.716 [2024-11-16 18:53:53.157037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:09.716 [2024-11-16 18:53:53.157083] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:09.716 [2024-11-16 18:53:53.157093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:09.716 [2024-11-16 18:53:53.159129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:09.716 [2024-11-16 18:53:53.159224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:09.716 spare 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.716 [2024-11-16 18:53:53.169020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.716 [2024-11-16 18:53:53.170815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:09.716 [2024-11-16 18:53:53.170884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:09.716 [2024-11-16 18:53:53.170937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:09.716 [2024-11-16 18:53:53.171135] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:09.716 [2024-11-16 18:53:53.171153] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:09.716 [2024-11-16 18:53:53.171387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:09.716 [2024-11-16 18:53:53.171563] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:09.716 [2024-11-16 18:53:53.171573] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:09.716 [2024-11-16 18:53:53.171744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.716 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:09.975 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.975 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.975 "name": "raid_bdev1", 00:13:09.975 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:09.975 "strip_size_kb": 0, 00:13:09.976 "state": "online", 00:13:09.976 "raid_level": "raid1", 00:13:09.976 "superblock": true, 00:13:09.976 "num_base_bdevs": 4, 00:13:09.976 "num_base_bdevs_discovered": 4, 00:13:09.976 "num_base_bdevs_operational": 4, 00:13:09.976 "base_bdevs_list": [ 00:13:09.976 { 00:13:09.976 "name": "BaseBdev1", 00:13:09.976 "uuid": "c8b47a60-c237-5782-bf37-a5aef660647f", 00:13:09.976 "is_configured": true, 00:13:09.976 "data_offset": 2048, 00:13:09.976 "data_size": 63488 00:13:09.976 }, 00:13:09.976 { 00:13:09.976 "name": "BaseBdev2", 00:13:09.976 "uuid": "d4d3b009-9063-57c4-a021-cb63e0ec2150", 00:13:09.976 "is_configured": true, 00:13:09.976 "data_offset": 2048, 00:13:09.976 "data_size": 63488 00:13:09.976 }, 00:13:09.976 { 00:13:09.976 "name": "BaseBdev3", 00:13:09.976 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:09.976 "is_configured": true, 00:13:09.976 "data_offset": 2048, 00:13:09.976 "data_size": 63488 00:13:09.976 }, 00:13:09.976 { 00:13:09.976 "name": "BaseBdev4", 00:13:09.976 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:09.976 "is_configured": true, 00:13:09.976 "data_offset": 2048, 00:13:09.976 "data_size": 63488 00:13:09.976 } 00:13:09.976 ] 00:13:09.976 }' 00:13:09.976 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:09.976 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:10.234 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:10.234 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:10.234 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.234 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.234 [2024-11-16 18:53:53.688470] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:10.493 
18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.493 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:10.493 [2024-11-16 18:53:53.947766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:10.493 /dev/nbd0 00:13:10.751 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:10.751 18:53:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:10.751 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:10.751 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:10.751 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:10.751 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:10.751 18:53:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:10.751 18:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:10.751 18:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:10.751 18:53:54 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:10.752 1+0 records in 00:13:10.752 1+0 records out 00:13:10.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355234 s, 11.5 MB/s 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:10.752 18:53:54 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:16.065 63488+0 records in 00:13:16.065 63488+0 records out 00:13:16.065 32505856 bytes (33 MB, 31 MiB) copied, 4.98049 s, 6.5 MB/s 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:16.065 [2024-11-16 18:53:59.231017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:16.065 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.066 [2024-11-16 18:53:59.267442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.066 18:53:59 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.066 "name": "raid_bdev1", 00:13:16.066 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:16.066 "strip_size_kb": 0, 00:13:16.066 "state": "online", 00:13:16.066 "raid_level": "raid1", 00:13:16.066 "superblock": true, 00:13:16.066 "num_base_bdevs": 4, 
00:13:16.066 "num_base_bdevs_discovered": 3, 00:13:16.066 "num_base_bdevs_operational": 3, 00:13:16.066 "base_bdevs_list": [ 00:13:16.066 { 00:13:16.066 "name": null, 00:13:16.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.066 "is_configured": false, 00:13:16.066 "data_offset": 0, 00:13:16.066 "data_size": 63488 00:13:16.066 }, 00:13:16.066 { 00:13:16.066 "name": "BaseBdev2", 00:13:16.066 "uuid": "d4d3b009-9063-57c4-a021-cb63e0ec2150", 00:13:16.066 "is_configured": true, 00:13:16.066 "data_offset": 2048, 00:13:16.066 "data_size": 63488 00:13:16.066 }, 00:13:16.066 { 00:13:16.066 "name": "BaseBdev3", 00:13:16.066 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:16.066 "is_configured": true, 00:13:16.066 "data_offset": 2048, 00:13:16.066 "data_size": 63488 00:13:16.066 }, 00:13:16.066 { 00:13:16.066 "name": "BaseBdev4", 00:13:16.066 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:16.066 "is_configured": true, 00:13:16.066 "data_offset": 2048, 00:13:16.066 "data_size": 63488 00:13:16.066 } 00:13:16.066 ] 00:13:16.066 }' 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.066 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.326 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:16.326 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.326 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.326 [2024-11-16 18:53:59.706735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:16.326 [2024-11-16 18:53:59.723161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:13:16.326 18:53:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.326 [2024-11-16 18:53:59.725221] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:16.326 18:53:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:17.264 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.264 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.264 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.264 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.264 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.525 "name": "raid_bdev1", 00:13:17.525 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:17.525 "strip_size_kb": 0, 00:13:17.525 "state": "online", 00:13:17.525 "raid_level": "raid1", 00:13:17.525 "superblock": true, 00:13:17.525 "num_base_bdevs": 4, 00:13:17.525 "num_base_bdevs_discovered": 4, 00:13:17.525 "num_base_bdevs_operational": 4, 00:13:17.525 "process": { 00:13:17.525 "type": "rebuild", 00:13:17.525 "target": "spare", 00:13:17.525 "progress": { 00:13:17.525 "blocks": 20480, 00:13:17.525 "percent": 32 00:13:17.525 } 00:13:17.525 }, 00:13:17.525 "base_bdevs_list": [ 
00:13:17.525 { 00:13:17.525 "name": "spare", 00:13:17.525 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:17.525 "is_configured": true, 00:13:17.525 "data_offset": 2048, 00:13:17.525 "data_size": 63488 00:13:17.525 }, 00:13:17.525 { 00:13:17.525 "name": "BaseBdev2", 00:13:17.525 "uuid": "d4d3b009-9063-57c4-a021-cb63e0ec2150", 00:13:17.525 "is_configured": true, 00:13:17.525 "data_offset": 2048, 00:13:17.525 "data_size": 63488 00:13:17.525 }, 00:13:17.525 { 00:13:17.525 "name": "BaseBdev3", 00:13:17.525 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:17.525 "is_configured": true, 00:13:17.525 "data_offset": 2048, 00:13:17.525 "data_size": 63488 00:13:17.525 }, 00:13:17.525 { 00:13:17.525 "name": "BaseBdev4", 00:13:17.525 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:17.525 "is_configured": true, 00:13:17.525 "data_offset": 2048, 00:13:17.525 "data_size": 63488 00:13:17.525 } 00:13:17.525 ] 00:13:17.525 }' 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.525 [2024-11-16 18:54:00.864785] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.525 [2024-11-16 18:54:00.930226] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:17.525 
[2024-11-16 18:54:00.930341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.525 [2024-11-16 18:54:00.930401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:17.525 [2024-11-16 18:54:00.930427] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:17.525 18:54:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.785 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.785 "name": "raid_bdev1", 00:13:17.785 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:17.785 "strip_size_kb": 0, 00:13:17.785 "state": "online", 00:13:17.785 "raid_level": "raid1", 00:13:17.785 "superblock": true, 00:13:17.785 "num_base_bdevs": 4, 00:13:17.785 "num_base_bdevs_discovered": 3, 00:13:17.785 "num_base_bdevs_operational": 3, 00:13:17.785 "base_bdevs_list": [ 00:13:17.785 { 00:13:17.785 "name": null, 00:13:17.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.785 "is_configured": false, 00:13:17.785 "data_offset": 0, 00:13:17.785 "data_size": 63488 00:13:17.785 }, 00:13:17.785 { 00:13:17.785 "name": "BaseBdev2", 00:13:17.785 "uuid": "d4d3b009-9063-57c4-a021-cb63e0ec2150", 00:13:17.785 "is_configured": true, 00:13:17.785 "data_offset": 2048, 00:13:17.785 "data_size": 63488 00:13:17.785 }, 00:13:17.785 { 00:13:17.785 "name": "BaseBdev3", 00:13:17.785 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:17.785 "is_configured": true, 00:13:17.785 "data_offset": 2048, 00:13:17.785 "data_size": 63488 00:13:17.785 }, 00:13:17.785 { 00:13:17.785 "name": "BaseBdev4", 00:13:17.785 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:17.785 "is_configured": true, 00:13:17.785 "data_offset": 2048, 00:13:17.785 "data_size": 63488 00:13:17.785 } 00:13:17.785 ] 00:13:17.785 }' 00:13:17.785 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.785 18:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.045 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:18.045 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:18.045 
18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:18.045 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:18.046 "name": "raid_bdev1", 00:13:18.046 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:18.046 "strip_size_kb": 0, 00:13:18.046 "state": "online", 00:13:18.046 "raid_level": "raid1", 00:13:18.046 "superblock": true, 00:13:18.046 "num_base_bdevs": 4, 00:13:18.046 "num_base_bdevs_discovered": 3, 00:13:18.046 "num_base_bdevs_operational": 3, 00:13:18.046 "base_bdevs_list": [ 00:13:18.046 { 00:13:18.046 "name": null, 00:13:18.046 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.046 "is_configured": false, 00:13:18.046 "data_offset": 0, 00:13:18.046 "data_size": 63488 00:13:18.046 }, 00:13:18.046 { 00:13:18.046 "name": "BaseBdev2", 00:13:18.046 "uuid": "d4d3b009-9063-57c4-a021-cb63e0ec2150", 00:13:18.046 "is_configured": true, 00:13:18.046 "data_offset": 2048, 00:13:18.046 "data_size": 63488 00:13:18.046 }, 00:13:18.046 { 00:13:18.046 "name": "BaseBdev3", 00:13:18.046 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:18.046 "is_configured": true, 00:13:18.046 "data_offset": 2048, 00:13:18.046 "data_size": 63488 
00:13:18.046 }, 00:13:18.046 { 00:13:18.046 "name": "BaseBdev4", 00:13:18.046 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:18.046 "is_configured": true, 00:13:18.046 "data_offset": 2048, 00:13:18.046 "data_size": 63488 00:13:18.046 } 00:13:18.046 ] 00:13:18.046 }' 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.046 18:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.046 [2024-11-16 18:54:01.503431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:18.305 [2024-11-16 18:54:01.518375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:13:18.305 18:54:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.305 18:54:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:18.305 [2024-11-16 18:54:01.520392] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.242 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.242 "name": "raid_bdev1", 00:13:19.242 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:19.242 "strip_size_kb": 0, 00:13:19.242 "state": "online", 00:13:19.242 "raid_level": "raid1", 00:13:19.242 "superblock": true, 00:13:19.242 "num_base_bdevs": 4, 00:13:19.242 "num_base_bdevs_discovered": 4, 00:13:19.242 "num_base_bdevs_operational": 4, 00:13:19.242 "process": { 00:13:19.242 "type": "rebuild", 00:13:19.242 "target": "spare", 00:13:19.242 "progress": { 00:13:19.242 "blocks": 20480, 00:13:19.242 "percent": 32 00:13:19.242 } 00:13:19.242 }, 00:13:19.242 "base_bdevs_list": [ 00:13:19.242 { 00:13:19.243 "name": "spare", 00:13:19.243 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:19.243 "is_configured": true, 00:13:19.243 "data_offset": 2048, 00:13:19.243 "data_size": 63488 00:13:19.243 }, 00:13:19.243 { 00:13:19.243 "name": "BaseBdev2", 00:13:19.243 "uuid": "d4d3b009-9063-57c4-a021-cb63e0ec2150", 00:13:19.243 "is_configured": true, 00:13:19.243 "data_offset": 2048, 00:13:19.243 "data_size": 63488 00:13:19.243 }, 00:13:19.243 { 00:13:19.243 "name": "BaseBdev3", 00:13:19.243 "uuid": 
"89a49c93-16f2-5ac2-977f-67339187d110", 00:13:19.243 "is_configured": true, 00:13:19.243 "data_offset": 2048, 00:13:19.243 "data_size": 63488 00:13:19.243 }, 00:13:19.243 { 00:13:19.243 "name": "BaseBdev4", 00:13:19.243 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:19.243 "is_configured": true, 00:13:19.243 "data_offset": 2048, 00:13:19.243 "data_size": 63488 00:13:19.243 } 00:13:19.243 ] 00:13:19.243 }' 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:19.243 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.243 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.243 [2024-11-16 18:54:02.640029] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:19.502 [2024-11-16 18:54:02.825528] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.503 "name": "raid_bdev1", 00:13:19.503 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:19.503 "strip_size_kb": 0, 00:13:19.503 "state": "online", 00:13:19.503 "raid_level": "raid1", 00:13:19.503 "superblock": true, 00:13:19.503 "num_base_bdevs": 4, 00:13:19.503 "num_base_bdevs_discovered": 3, 00:13:19.503 "num_base_bdevs_operational": 3, 00:13:19.503 
"process": { 00:13:19.503 "type": "rebuild", 00:13:19.503 "target": "spare", 00:13:19.503 "progress": { 00:13:19.503 "blocks": 24576, 00:13:19.503 "percent": 38 00:13:19.503 } 00:13:19.503 }, 00:13:19.503 "base_bdevs_list": [ 00:13:19.503 { 00:13:19.503 "name": "spare", 00:13:19.503 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:19.503 "is_configured": true, 00:13:19.503 "data_offset": 2048, 00:13:19.503 "data_size": 63488 00:13:19.503 }, 00:13:19.503 { 00:13:19.503 "name": null, 00:13:19.503 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.503 "is_configured": false, 00:13:19.503 "data_offset": 0, 00:13:19.503 "data_size": 63488 00:13:19.503 }, 00:13:19.503 { 00:13:19.503 "name": "BaseBdev3", 00:13:19.503 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:19.503 "is_configured": true, 00:13:19.503 "data_offset": 2048, 00:13:19.503 "data_size": 63488 00:13:19.503 }, 00:13:19.503 { 00:13:19.503 "name": "BaseBdev4", 00:13:19.503 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:19.503 "is_configured": true, 00:13:19.503 "data_offset": 2048, 00:13:19.503 "data_size": 63488 00:13:19.503 } 00:13:19.503 ] 00:13:19.503 }' 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.503 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=444 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:19.763 18:54:02 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.763 18:54:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.763 18:54:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.763 18:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:19.763 "name": "raid_bdev1", 00:13:19.763 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:19.763 "strip_size_kb": 0, 00:13:19.763 "state": "online", 00:13:19.763 "raid_level": "raid1", 00:13:19.763 "superblock": true, 00:13:19.763 "num_base_bdevs": 4, 00:13:19.763 "num_base_bdevs_discovered": 3, 00:13:19.763 "num_base_bdevs_operational": 3, 00:13:19.763 "process": { 00:13:19.763 "type": "rebuild", 00:13:19.763 "target": "spare", 00:13:19.763 "progress": { 00:13:19.763 "blocks": 26624, 00:13:19.763 "percent": 41 00:13:19.763 } 00:13:19.763 }, 00:13:19.763 "base_bdevs_list": [ 00:13:19.763 { 00:13:19.763 "name": "spare", 00:13:19.763 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:19.763 "is_configured": true, 00:13:19.763 "data_offset": 2048, 00:13:19.763 "data_size": 63488 00:13:19.763 }, 00:13:19.763 { 00:13:19.763 "name": null, 00:13:19.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.763 
"is_configured": false, 00:13:19.763 "data_offset": 0, 00:13:19.763 "data_size": 63488 00:13:19.763 }, 00:13:19.763 { 00:13:19.763 "name": "BaseBdev3", 00:13:19.763 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:19.763 "is_configured": true, 00:13:19.763 "data_offset": 2048, 00:13:19.763 "data_size": 63488 00:13:19.763 }, 00:13:19.763 { 00:13:19.763 "name": "BaseBdev4", 00:13:19.763 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:19.763 "is_configured": true, 00:13:19.763 "data_offset": 2048, 00:13:19.763 "data_size": 63488 00:13:19.763 } 00:13:19.763 ] 00:13:19.763 }' 00:13:19.763 18:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:19.763 18:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:19.763 18:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:19.763 18:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:19.763 18:54:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:20.702 "name": "raid_bdev1", 00:13:20.702 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:20.702 "strip_size_kb": 0, 00:13:20.702 "state": "online", 00:13:20.702 "raid_level": "raid1", 00:13:20.702 "superblock": true, 00:13:20.702 "num_base_bdevs": 4, 00:13:20.702 "num_base_bdevs_discovered": 3, 00:13:20.702 "num_base_bdevs_operational": 3, 00:13:20.702 "process": { 00:13:20.702 "type": "rebuild", 00:13:20.702 "target": "spare", 00:13:20.702 "progress": { 00:13:20.702 "blocks": 49152, 00:13:20.702 "percent": 77 00:13:20.702 } 00:13:20.702 }, 00:13:20.702 "base_bdevs_list": [ 00:13:20.702 { 00:13:20.702 "name": "spare", 00:13:20.702 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:20.702 "is_configured": true, 00:13:20.702 "data_offset": 2048, 00:13:20.702 "data_size": 63488 00:13:20.702 }, 00:13:20.702 { 00:13:20.702 "name": null, 00:13:20.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.702 "is_configured": false, 00:13:20.702 "data_offset": 0, 00:13:20.702 "data_size": 63488 00:13:20.702 }, 00:13:20.702 { 00:13:20.702 "name": "BaseBdev3", 00:13:20.702 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:20.702 "is_configured": true, 00:13:20.702 "data_offset": 2048, 00:13:20.702 "data_size": 63488 00:13:20.702 }, 00:13:20.702 { 00:13:20.702 "name": "BaseBdev4", 00:13:20.702 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:20.702 "is_configured": true, 00:13:20.702 "data_offset": 2048, 00:13:20.702 "data_size": 63488 00:13:20.702 } 00:13:20.702 ] 00:13:20.702 
}' 00:13:20.702 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:20.962 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:20.962 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:20.962 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:20.962 18:54:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:21.541 [2024-11-16 18:54:04.733196] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:21.541 [2024-11-16 18:54:04.733341] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:21.541 [2024-11-16 18:54:04.733516] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.812 18:54:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.812 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.071 "name": "raid_bdev1", 00:13:22.071 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:22.071 "strip_size_kb": 0, 00:13:22.071 "state": "online", 00:13:22.071 "raid_level": "raid1", 00:13:22.071 "superblock": true, 00:13:22.071 "num_base_bdevs": 4, 00:13:22.071 "num_base_bdevs_discovered": 3, 00:13:22.071 "num_base_bdevs_operational": 3, 00:13:22.071 "base_bdevs_list": [ 00:13:22.071 { 00:13:22.071 "name": "spare", 00:13:22.071 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:22.071 "is_configured": true, 00:13:22.071 "data_offset": 2048, 00:13:22.071 "data_size": 63488 00:13:22.071 }, 00:13:22.071 { 00:13:22.071 "name": null, 00:13:22.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.071 "is_configured": false, 00:13:22.071 "data_offset": 0, 00:13:22.071 "data_size": 63488 00:13:22.071 }, 00:13:22.071 { 00:13:22.071 "name": "BaseBdev3", 00:13:22.071 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:22.071 "is_configured": true, 00:13:22.071 "data_offset": 2048, 00:13:22.071 "data_size": 63488 00:13:22.071 }, 00:13:22.071 { 00:13:22.071 "name": "BaseBdev4", 00:13:22.071 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:22.071 "is_configured": true, 00:13:22.071 "data_offset": 2048, 00:13:22.071 "data_size": 63488 00:13:22.071 } 00:13:22.071 ] 00:13:22.071 }' 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:22.071 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.072 "name": "raid_bdev1", 00:13:22.072 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:22.072 "strip_size_kb": 0, 00:13:22.072 "state": "online", 00:13:22.072 "raid_level": "raid1", 00:13:22.072 "superblock": true, 00:13:22.072 "num_base_bdevs": 4, 00:13:22.072 "num_base_bdevs_discovered": 3, 00:13:22.072 "num_base_bdevs_operational": 3, 00:13:22.072 "base_bdevs_list": [ 00:13:22.072 { 00:13:22.072 "name": "spare", 00:13:22.072 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:22.072 "is_configured": true, 00:13:22.072 "data_offset": 2048, 00:13:22.072 "data_size": 63488 00:13:22.072 }, 00:13:22.072 { 00:13:22.072 "name": 
null, 00:13:22.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.072 "is_configured": false, 00:13:22.072 "data_offset": 0, 00:13:22.072 "data_size": 63488 00:13:22.072 }, 00:13:22.072 { 00:13:22.072 "name": "BaseBdev3", 00:13:22.072 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:22.072 "is_configured": true, 00:13:22.072 "data_offset": 2048, 00:13:22.072 "data_size": 63488 00:13:22.072 }, 00:13:22.072 { 00:13:22.072 "name": "BaseBdev4", 00:13:22.072 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:22.072 "is_configured": true, 00:13:22.072 "data_offset": 2048, 00:13:22.072 "data_size": 63488 00:13:22.072 } 00:13:22.072 ] 00:13:22.072 }' 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.072 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.331 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.331 "name": "raid_bdev1", 00:13:22.331 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:22.331 "strip_size_kb": 0, 00:13:22.331 "state": "online", 00:13:22.331 "raid_level": "raid1", 00:13:22.331 "superblock": true, 00:13:22.331 "num_base_bdevs": 4, 00:13:22.331 "num_base_bdevs_discovered": 3, 00:13:22.331 "num_base_bdevs_operational": 3, 00:13:22.331 "base_bdevs_list": [ 00:13:22.331 { 00:13:22.331 "name": "spare", 00:13:22.331 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:22.331 "is_configured": true, 00:13:22.331 "data_offset": 2048, 00:13:22.331 "data_size": 63488 00:13:22.331 }, 00:13:22.331 { 00:13:22.331 "name": null, 00:13:22.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.331 "is_configured": false, 00:13:22.331 "data_offset": 0, 00:13:22.331 "data_size": 63488 00:13:22.331 }, 00:13:22.331 { 00:13:22.331 "name": "BaseBdev3", 00:13:22.331 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:22.331 "is_configured": true, 00:13:22.331 "data_offset": 2048, 00:13:22.331 "data_size": 63488 00:13:22.331 }, 00:13:22.331 { 00:13:22.331 "name": "BaseBdev4", 00:13:22.331 
"uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:22.331 "is_configured": true, 00:13:22.331 "data_offset": 2048, 00:13:22.331 "data_size": 63488 00:13:22.331 } 00:13:22.331 ] 00:13:22.331 }' 00:13:22.331 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.331 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.590 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:22.590 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.590 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.590 [2024-11-16 18:54:05.980161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:22.590 [2024-11-16 18:54:05.980240] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.590 [2024-11-16 18:54:05.980344] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.590 [2024-11-16 18:54:05.980437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:22.590 [2024-11-16 18:54:05.980482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:22.590 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.590 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:22.590 18:54:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.590 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.590 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.590 18:54:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.590 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:22.849 /dev/nbd0 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.849 1+0 records in 00:13:22.849 1+0 records out 00:13:22.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201456 s, 20.3 MB/s 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:22.849 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:23.109 /dev/nbd1 00:13:23.109 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:13:23.109 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:23.109 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:23.109 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:23.109 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:23.109 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:23.109 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:23.109 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:23.109 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:23.110 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:23.110 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.110 1+0 records in 00:13:23.110 1+0 records out 00:13:23.110 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420035 s, 9.8 MB/s 00:13:23.110 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.110 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:23.110 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.110 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:23.110 18:54:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:23.110 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:23.110 18:54:06 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:23.110 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:23.370 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:23.370 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:23.370 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:23.370 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:23.370 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:23.370 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.370 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:23.630 18:54:06 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.890 [2024-11-16 18:54:07.160051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:23.890 [2024-11-16 18:54:07.160107] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:13:23.890 [2024-11-16 18:54:07.160130] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:23.890 [2024-11-16 18:54:07.160140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:23.890 [2024-11-16 18:54:07.162648] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:23.890 [2024-11-16 18:54:07.162694] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:23.890 [2024-11-16 18:54:07.162790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:23.890 [2024-11-16 18:54:07.162857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.890 [2024-11-16 18:54:07.162998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:23.890 [2024-11-16 18:54:07.163094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:23.890 spare 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.890 [2024-11-16 18:54:07.262982] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:23.890 [2024-11-16 18:54:07.263058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:23.890 [2024-11-16 18:54:07.263368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:23.890 [2024-11-16 18:54:07.263545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:23.890 [2024-11-16 
18:54:07.263560] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:23.890 [2024-11-16 18:54:07.263751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.890 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.891 "name": "raid_bdev1", 00:13:23.891 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:23.891 "strip_size_kb": 0, 00:13:23.891 "state": "online", 00:13:23.891 "raid_level": "raid1", 00:13:23.891 "superblock": true, 00:13:23.891 "num_base_bdevs": 4, 00:13:23.891 "num_base_bdevs_discovered": 3, 00:13:23.891 "num_base_bdevs_operational": 3, 00:13:23.891 "base_bdevs_list": [ 00:13:23.891 { 00:13:23.891 "name": "spare", 00:13:23.891 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:23.891 "is_configured": true, 00:13:23.891 "data_offset": 2048, 00:13:23.891 "data_size": 63488 00:13:23.891 }, 00:13:23.891 { 00:13:23.891 "name": null, 00:13:23.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.891 "is_configured": false, 00:13:23.891 "data_offset": 2048, 00:13:23.891 "data_size": 63488 00:13:23.891 }, 00:13:23.891 { 00:13:23.891 "name": "BaseBdev3", 00:13:23.891 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:23.891 "is_configured": true, 00:13:23.891 "data_offset": 2048, 00:13:23.891 "data_size": 63488 00:13:23.891 }, 00:13:23.891 { 00:13:23.891 "name": "BaseBdev4", 00:13:23.891 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:23.891 "is_configured": true, 00:13:23.891 "data_offset": 2048, 00:13:23.891 "data_size": 63488 00:13:23.891 } 00:13:23.891 ] 00:13:23.891 }' 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.891 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:24.461 18:54:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.461 "name": "raid_bdev1", 00:13:24.461 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:24.461 "strip_size_kb": 0, 00:13:24.461 "state": "online", 00:13:24.461 "raid_level": "raid1", 00:13:24.461 "superblock": true, 00:13:24.461 "num_base_bdevs": 4, 00:13:24.461 "num_base_bdevs_discovered": 3, 00:13:24.461 "num_base_bdevs_operational": 3, 00:13:24.461 "base_bdevs_list": [ 00:13:24.461 { 00:13:24.461 "name": "spare", 00:13:24.461 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:24.461 "is_configured": true, 00:13:24.461 "data_offset": 2048, 00:13:24.461 "data_size": 63488 00:13:24.461 }, 00:13:24.461 { 00:13:24.461 "name": null, 00:13:24.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.461 "is_configured": false, 00:13:24.461 "data_offset": 2048, 00:13:24.461 "data_size": 63488 00:13:24.461 }, 00:13:24.461 { 00:13:24.461 "name": "BaseBdev3", 00:13:24.461 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:24.461 "is_configured": true, 00:13:24.461 "data_offset": 2048, 00:13:24.461 "data_size": 63488 00:13:24.461 }, 00:13:24.461 { 00:13:24.461 "name": "BaseBdev4", 00:13:24.461 "uuid": 
"8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:24.461 "is_configured": true, 00:13:24.461 "data_offset": 2048, 00:13:24.461 "data_size": 63488 00:13:24.461 } 00:13:24.461 ] 00:13:24.461 }' 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.461 [2024-11-16 18:54:07.918955] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:24.461 18:54:07 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.461 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.721 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.721 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.721 "name": "raid_bdev1", 00:13:24.721 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:24.721 "strip_size_kb": 0, 00:13:24.721 "state": "online", 00:13:24.721 "raid_level": "raid1", 00:13:24.721 "superblock": true, 00:13:24.721 "num_base_bdevs": 4, 00:13:24.721 "num_base_bdevs_discovered": 2, 00:13:24.721 "num_base_bdevs_operational": 2, 00:13:24.721 "base_bdevs_list": [ 00:13:24.721 { 
00:13:24.721 "name": null, 00:13:24.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.721 "is_configured": false, 00:13:24.721 "data_offset": 0, 00:13:24.721 "data_size": 63488 00:13:24.721 }, 00:13:24.721 { 00:13:24.721 "name": null, 00:13:24.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.721 "is_configured": false, 00:13:24.721 "data_offset": 2048, 00:13:24.721 "data_size": 63488 00:13:24.721 }, 00:13:24.721 { 00:13:24.721 "name": "BaseBdev3", 00:13:24.721 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:24.721 "is_configured": true, 00:13:24.721 "data_offset": 2048, 00:13:24.721 "data_size": 63488 00:13:24.721 }, 00:13:24.721 { 00:13:24.721 "name": "BaseBdev4", 00:13:24.721 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:24.721 "is_configured": true, 00:13:24.721 "data_offset": 2048, 00:13:24.721 "data_size": 63488 00:13:24.721 } 00:13:24.721 ] 00:13:24.721 }' 00:13:24.721 18:54:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.721 18:54:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.981 18:54:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:24.981 18:54:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.981 18:54:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:24.981 [2024-11-16 18:54:08.398140] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.981 [2024-11-16 18:54:08.398401] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:24.981 [2024-11-16 18:54:08.398464] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:24.981 [2024-11-16 18:54:08.398531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:24.981 [2024-11-16 18:54:08.412819] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:13:24.981 18:54:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.981 18:54:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:24.981 [2024-11-16 18:54:08.414712] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.361 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.361 "name": "raid_bdev1", 00:13:26.361 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:26.361 "strip_size_kb": 0, 00:13:26.361 "state": "online", 00:13:26.362 "raid_level": "raid1", 
00:13:26.362 "superblock": true, 00:13:26.362 "num_base_bdevs": 4, 00:13:26.362 "num_base_bdevs_discovered": 3, 00:13:26.362 "num_base_bdevs_operational": 3, 00:13:26.362 "process": { 00:13:26.362 "type": "rebuild", 00:13:26.362 "target": "spare", 00:13:26.362 "progress": { 00:13:26.362 "blocks": 20480, 00:13:26.362 "percent": 32 00:13:26.362 } 00:13:26.362 }, 00:13:26.362 "base_bdevs_list": [ 00:13:26.362 { 00:13:26.362 "name": "spare", 00:13:26.362 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:26.362 "is_configured": true, 00:13:26.362 "data_offset": 2048, 00:13:26.362 "data_size": 63488 00:13:26.362 }, 00:13:26.362 { 00:13:26.362 "name": null, 00:13:26.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.362 "is_configured": false, 00:13:26.362 "data_offset": 2048, 00:13:26.362 "data_size": 63488 00:13:26.362 }, 00:13:26.362 { 00:13:26.362 "name": "BaseBdev3", 00:13:26.362 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:26.362 "is_configured": true, 00:13:26.362 "data_offset": 2048, 00:13:26.362 "data_size": 63488 00:13:26.362 }, 00:13:26.362 { 00:13:26.362 "name": "BaseBdev4", 00:13:26.362 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:26.362 "is_configured": true, 00:13:26.362 "data_offset": 2048, 00:13:26.362 "data_size": 63488 00:13:26.362 } 00:13:26.362 ] 00:13:26.362 }' 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.362 [2024-11-16 18:54:09.582179] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.362 [2024-11-16 18:54:09.619662] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:26.362 [2024-11-16 18:54:09.619716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.362 [2024-11-16 18:54:09.619733] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:26.362 [2024-11-16 18:54:09.619740] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.362 "name": "raid_bdev1", 00:13:26.362 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:26.362 "strip_size_kb": 0, 00:13:26.362 "state": "online", 00:13:26.362 "raid_level": "raid1", 00:13:26.362 "superblock": true, 00:13:26.362 "num_base_bdevs": 4, 00:13:26.362 "num_base_bdevs_discovered": 2, 00:13:26.362 "num_base_bdevs_operational": 2, 00:13:26.362 "base_bdevs_list": [ 00:13:26.362 { 00:13:26.362 "name": null, 00:13:26.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.362 "is_configured": false, 00:13:26.362 "data_offset": 0, 00:13:26.362 "data_size": 63488 00:13:26.362 }, 00:13:26.362 { 00:13:26.362 "name": null, 00:13:26.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.362 "is_configured": false, 00:13:26.362 "data_offset": 2048, 00:13:26.362 "data_size": 63488 00:13:26.362 }, 00:13:26.362 { 00:13:26.362 "name": "BaseBdev3", 00:13:26.362 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:26.362 "is_configured": true, 00:13:26.362 "data_offset": 2048, 00:13:26.362 "data_size": 63488 00:13:26.362 }, 00:13:26.362 { 00:13:26.362 "name": "BaseBdev4", 00:13:26.362 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:26.362 "is_configured": true, 00:13:26.362 "data_offset": 2048, 00:13:26.362 "data_size": 63488 00:13:26.362 } 00:13:26.362 ] 00:13:26.362 }' 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:26.362 18:54:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.933 18:54:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:26.933 18:54:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.933 18:54:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:26.933 [2024-11-16 18:54:10.119971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:26.933 [2024-11-16 18:54:10.120094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.933 [2024-11-16 18:54:10.120129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:13:26.933 [2024-11-16 18:54:10.120141] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.933 [2024-11-16 18:54:10.120609] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.933 [2024-11-16 18:54:10.120628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:26.933 [2024-11-16 18:54:10.120769] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:26.933 [2024-11-16 18:54:10.120786] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:26.933 [2024-11-16 18:54:10.120806] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:26.933 [2024-11-16 18:54:10.120838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:26.933 [2024-11-16 18:54:10.134920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:13:26.933 spare 00:13:26.933 18:54:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.933 18:54:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:26.933 [2024-11-16 18:54:10.136932] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.873 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.873 "name": "raid_bdev1", 00:13:27.873 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:27.873 "strip_size_kb": 0, 00:13:27.873 "state": "online", 00:13:27.873 
"raid_level": "raid1", 00:13:27.873 "superblock": true, 00:13:27.873 "num_base_bdevs": 4, 00:13:27.873 "num_base_bdevs_discovered": 3, 00:13:27.873 "num_base_bdevs_operational": 3, 00:13:27.873 "process": { 00:13:27.873 "type": "rebuild", 00:13:27.873 "target": "spare", 00:13:27.873 "progress": { 00:13:27.873 "blocks": 20480, 00:13:27.873 "percent": 32 00:13:27.873 } 00:13:27.873 }, 00:13:27.873 "base_bdevs_list": [ 00:13:27.873 { 00:13:27.873 "name": "spare", 00:13:27.873 "uuid": "c9514ce9-a476-5ce8-a13c-461d265f0292", 00:13:27.873 "is_configured": true, 00:13:27.873 "data_offset": 2048, 00:13:27.873 "data_size": 63488 00:13:27.873 }, 00:13:27.873 { 00:13:27.873 "name": null, 00:13:27.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.873 "is_configured": false, 00:13:27.873 "data_offset": 2048, 00:13:27.873 "data_size": 63488 00:13:27.873 }, 00:13:27.873 { 00:13:27.873 "name": "BaseBdev3", 00:13:27.874 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:27.874 "is_configured": true, 00:13:27.874 "data_offset": 2048, 00:13:27.874 "data_size": 63488 00:13:27.874 }, 00:13:27.874 { 00:13:27.874 "name": "BaseBdev4", 00:13:27.874 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:27.874 "is_configured": true, 00:13:27.874 "data_offset": 2048, 00:13:27.874 "data_size": 63488 00:13:27.874 } 00:13:27.874 ] 00:13:27.874 }' 00:13:27.874 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.874 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.874 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.874 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.874 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:27.874 18:54:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.874 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.874 [2024-11-16 18:54:11.280396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.874 [2024-11-16 18:54:11.341840] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:27.874 [2024-11-16 18:54:11.341902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.874 [2024-11-16 18:54:11.341918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:27.874 [2024-11-16 18:54:11.341927] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.134 
18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.134 "name": "raid_bdev1", 00:13:28.134 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:28.134 "strip_size_kb": 0, 00:13:28.134 "state": "online", 00:13:28.134 "raid_level": "raid1", 00:13:28.134 "superblock": true, 00:13:28.134 "num_base_bdevs": 4, 00:13:28.134 "num_base_bdevs_discovered": 2, 00:13:28.134 "num_base_bdevs_operational": 2, 00:13:28.134 "base_bdevs_list": [ 00:13:28.134 { 00:13:28.134 "name": null, 00:13:28.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.134 "is_configured": false, 00:13:28.134 "data_offset": 0, 00:13:28.134 "data_size": 63488 00:13:28.134 }, 00:13:28.134 { 00:13:28.134 "name": null, 00:13:28.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.134 "is_configured": false, 00:13:28.134 "data_offset": 2048, 00:13:28.134 "data_size": 63488 00:13:28.134 }, 00:13:28.134 { 00:13:28.134 "name": "BaseBdev3", 00:13:28.134 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:28.134 "is_configured": true, 00:13:28.134 "data_offset": 2048, 00:13:28.134 "data_size": 63488 00:13:28.134 }, 00:13:28.134 { 00:13:28.134 "name": "BaseBdev4", 00:13:28.134 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:28.134 "is_configured": true, 00:13:28.134 "data_offset": 2048, 00:13:28.134 "data_size": 63488 00:13:28.134 } 00:13:28.134 ] 00:13:28.134 }' 00:13:28.134 18:54:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.134 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.394 "name": "raid_bdev1", 00:13:28.394 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:28.394 "strip_size_kb": 0, 00:13:28.394 "state": "online", 00:13:28.394 "raid_level": "raid1", 00:13:28.394 "superblock": true, 00:13:28.394 "num_base_bdevs": 4, 00:13:28.394 "num_base_bdevs_discovered": 2, 00:13:28.394 "num_base_bdevs_operational": 2, 00:13:28.394 "base_bdevs_list": [ 00:13:28.394 { 00:13:28.394 "name": null, 00:13:28.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.394 "is_configured": false, 00:13:28.394 "data_offset": 0, 00:13:28.394 "data_size": 63488 00:13:28.394 }, 00:13:28.394 
{ 00:13:28.394 "name": null, 00:13:28.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.394 "is_configured": false, 00:13:28.394 "data_offset": 2048, 00:13:28.394 "data_size": 63488 00:13:28.394 }, 00:13:28.394 { 00:13:28.394 "name": "BaseBdev3", 00:13:28.394 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:28.394 "is_configured": true, 00:13:28.394 "data_offset": 2048, 00:13:28.394 "data_size": 63488 00:13:28.394 }, 00:13:28.394 { 00:13:28.394 "name": "BaseBdev4", 00:13:28.394 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:28.394 "is_configured": true, 00:13:28.394 "data_offset": 2048, 00:13:28.394 "data_size": 63488 00:13:28.394 } 00:13:28.394 ] 00:13:28.394 }' 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.394 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.654 [2024-11-16 18:54:11.906924] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:28.654 [2024-11-16 18:54:11.907059] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.654 [2024-11-16 18:54:11.907087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:13:28.654 [2024-11-16 18:54:11.907098] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.654 [2024-11-16 18:54:11.907551] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.654 [2024-11-16 18:54:11.907571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:28.654 [2024-11-16 18:54:11.907683] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:28.654 [2024-11-16 18:54:11.907700] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:28.654 [2024-11-16 18:54:11.907708] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:28.654 [2024-11-16 18:54:11.907733] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:28.654 BaseBdev1 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.654 18:54:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.598 18:54:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.598 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.598 "name": "raid_bdev1", 00:13:29.598 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:29.598 "strip_size_kb": 0, 00:13:29.598 "state": "online", 00:13:29.598 "raid_level": "raid1", 00:13:29.598 "superblock": true, 00:13:29.598 "num_base_bdevs": 4, 00:13:29.598 "num_base_bdevs_discovered": 2, 00:13:29.598 "num_base_bdevs_operational": 2, 00:13:29.598 "base_bdevs_list": [ 00:13:29.598 { 00:13:29.598 "name": null, 00:13:29.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.598 "is_configured": false, 00:13:29.598 "data_offset": 0, 00:13:29.598 "data_size": 63488 00:13:29.598 }, 00:13:29.598 { 00:13:29.598 "name": null, 00:13:29.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.598 
"is_configured": false, 00:13:29.598 "data_offset": 2048, 00:13:29.598 "data_size": 63488 00:13:29.598 }, 00:13:29.598 { 00:13:29.598 "name": "BaseBdev3", 00:13:29.598 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:29.598 "is_configured": true, 00:13:29.598 "data_offset": 2048, 00:13:29.599 "data_size": 63488 00:13:29.599 }, 00:13:29.599 { 00:13:29.599 "name": "BaseBdev4", 00:13:29.599 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:29.599 "is_configured": true, 00:13:29.599 "data_offset": 2048, 00:13:29.599 "data_size": 63488 00:13:29.599 } 00:13:29.599 ] 00:13:29.599 }' 00:13:29.599 18:54:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.599 18:54:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.174 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:13:30.174 "name": "raid_bdev1", 00:13:30.174 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:30.174 "strip_size_kb": 0, 00:13:30.174 "state": "online", 00:13:30.174 "raid_level": "raid1", 00:13:30.174 "superblock": true, 00:13:30.174 "num_base_bdevs": 4, 00:13:30.174 "num_base_bdevs_discovered": 2, 00:13:30.174 "num_base_bdevs_operational": 2, 00:13:30.174 "base_bdevs_list": [ 00:13:30.174 { 00:13:30.174 "name": null, 00:13:30.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.174 "is_configured": false, 00:13:30.174 "data_offset": 0, 00:13:30.174 "data_size": 63488 00:13:30.174 }, 00:13:30.174 { 00:13:30.174 "name": null, 00:13:30.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.174 "is_configured": false, 00:13:30.174 "data_offset": 2048, 00:13:30.174 "data_size": 63488 00:13:30.174 }, 00:13:30.175 { 00:13:30.175 "name": "BaseBdev3", 00:13:30.175 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:30.175 "is_configured": true, 00:13:30.175 "data_offset": 2048, 00:13:30.175 "data_size": 63488 00:13:30.175 }, 00:13:30.175 { 00:13:30.175 "name": "BaseBdev4", 00:13:30.175 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:30.175 "is_configured": true, 00:13:30.175 "data_offset": 2048, 00:13:30.175 "data_size": 63488 00:13:30.175 } 00:13:30.175 ] 00:13:30.175 }' 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.175 [2024-11-16 18:54:13.472227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.175 [2024-11-16 18:54:13.472492] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:13:30.175 [2024-11-16 18:54:13.472512] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:30.175 request: 00:13:30.175 { 00:13:30.175 "base_bdev": "BaseBdev1", 00:13:30.175 "raid_bdev": "raid_bdev1", 00:13:30.175 "method": "bdev_raid_add_base_bdev", 00:13:30.175 "req_id": 1 00:13:30.175 } 00:13:30.175 Got JSON-RPC error response 00:13:30.175 response: 00:13:30.175 { 00:13:30.175 "code": -22, 00:13:30.175 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:30.175 } 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:30.175 18:54:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.115 "name": "raid_bdev1", 00:13:31.115 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:31.115 "strip_size_kb": 0, 00:13:31.115 "state": "online", 00:13:31.115 "raid_level": "raid1", 00:13:31.115 "superblock": true, 00:13:31.115 "num_base_bdevs": 4, 00:13:31.115 "num_base_bdevs_discovered": 2, 00:13:31.115 "num_base_bdevs_operational": 2, 00:13:31.115 "base_bdevs_list": [ 00:13:31.115 { 00:13:31.115 "name": null, 00:13:31.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.115 "is_configured": false, 00:13:31.115 "data_offset": 0, 00:13:31.115 "data_size": 63488 00:13:31.115 }, 00:13:31.115 { 00:13:31.115 "name": null, 00:13:31.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.115 "is_configured": false, 00:13:31.115 "data_offset": 2048, 00:13:31.115 "data_size": 63488 00:13:31.115 }, 00:13:31.115 { 00:13:31.115 "name": "BaseBdev3", 00:13:31.115 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:31.115 "is_configured": true, 00:13:31.115 "data_offset": 2048, 00:13:31.115 "data_size": 63488 00:13:31.115 }, 00:13:31.115 { 00:13:31.115 "name": "BaseBdev4", 00:13:31.115 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:31.115 "is_configured": true, 00:13:31.115 "data_offset": 2048, 00:13:31.115 "data_size": 63488 00:13:31.115 } 00:13:31.115 ] 00:13:31.115 }' 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.115 18:54:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.686 18:54:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.686 "name": "raid_bdev1", 00:13:31.686 "uuid": "e9935de1-e15a-4843-88e7-e3c060bec639", 00:13:31.686 "strip_size_kb": 0, 00:13:31.686 "state": "online", 00:13:31.686 "raid_level": "raid1", 00:13:31.686 "superblock": true, 00:13:31.686 "num_base_bdevs": 4, 00:13:31.686 "num_base_bdevs_discovered": 2, 00:13:31.686 "num_base_bdevs_operational": 2, 00:13:31.686 "base_bdevs_list": [ 00:13:31.686 { 00:13:31.686 "name": null, 00:13:31.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.686 "is_configured": false, 00:13:31.686 "data_offset": 0, 00:13:31.686 "data_size": 63488 00:13:31.686 }, 00:13:31.686 { 00:13:31.686 "name": null, 00:13:31.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:31.686 "is_configured": false, 00:13:31.686 "data_offset": 2048, 00:13:31.686 "data_size": 63488 00:13:31.686 }, 00:13:31.686 { 00:13:31.686 "name": "BaseBdev3", 00:13:31.686 "uuid": "89a49c93-16f2-5ac2-977f-67339187d110", 00:13:31.686 "is_configured": true, 00:13:31.686 "data_offset": 2048, 00:13:31.686 "data_size": 63488 00:13:31.686 }, 
00:13:31.686 { 00:13:31.686 "name": "BaseBdev4", 00:13:31.686 "uuid": "8ef25a98-7df8-507d-926e-12ed3c15095a", 00:13:31.686 "is_configured": true, 00:13:31.686 "data_offset": 2048, 00:13:31.686 "data_size": 63488 00:13:31.686 } 00:13:31.686 ] 00:13:31.686 }' 00:13:31.686 18:54:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77717 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77717 ']' 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77717 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77717 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:31.686 killing process with pid 77717 00:13:31.686 Received shutdown signal, test time was about 60.000000 seconds 00:13:31.686 00:13:31.686 Latency(us) 00:13:31.686 [2024-11-16T18:54:15.158Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:31.686 [2024-11-16T18:54:15.158Z] 
=================================================================================================================== 00:13:31.686 [2024-11-16T18:54:15.158Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77717' 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77717 00:13:31.686 [2024-11-16 18:54:15.101742] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.686 [2024-11-16 18:54:15.101871] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.686 18:54:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77717 00:13:31.686 [2024-11-16 18:54:15.101937] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.686 [2024-11-16 18:54:15.101946] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:32.256 [2024-11-16 18:54:15.564852] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:33.195 18:54:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:33.195 00:13:33.195 real 0m24.706s 00:13:33.195 user 0m29.960s 00:13:33.195 sys 0m3.750s 00:13:33.195 18:54:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:33.195 18:54:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.195 ************************************ 00:13:33.195 END TEST raid_rebuild_test_sb 00:13:33.195 ************************************ 00:13:33.195 18:54:16 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:13:33.455 18:54:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:33.455 18:54:16 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:33.455 18:54:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:33.455 ************************************ 00:13:33.455 START TEST raid_rebuild_test_io 00:13:33.455 ************************************ 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.455 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78465 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78465 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78465 ']' 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:33.456 18:54:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.456 [2024-11-16 18:54:16.788800] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:33.456 [2024-11-16 18:54:16.789021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78465 ] 00:13:33.456 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:33.456 Zero copy mechanism will not be used. 
00:13:33.716 [2024-11-16 18:54:16.963189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:33.716 [2024-11-16 18:54:17.078418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.981 [2024-11-16 18:54:17.270311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.981 [2024-11-16 18:54:17.270452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.242 BaseBdev1_malloc 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.242 [2024-11-16 18:54:17.655589] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:34.242 [2024-11-16 18:54:17.655669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.242 [2024-11-16 18:54:17.655694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:34.242 [2024-11-16 
18:54:17.655705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.242 [2024-11-16 18:54:17.657747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.242 [2024-11-16 18:54:17.657785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:34.242 BaseBdev1 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.242 BaseBdev2_malloc 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.242 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.242 [2024-11-16 18:54:17.711326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:34.242 [2024-11-16 18:54:17.711444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.242 [2024-11-16 18:54:17.711466] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:34.242 [2024-11-16 18:54:17.711479] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.242 [2024-11-16 18:54:17.713507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:13:34.502 [2024-11-16 18:54:17.713588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:34.502 BaseBdev2 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 BaseBdev3_malloc 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 [2024-11-16 18:54:17.777318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:34.502 [2024-11-16 18:54:17.777433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.502 [2024-11-16 18:54:17.777457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:34.502 [2024-11-16 18:54:17.777467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.502 [2024-11-16 18:54:17.779445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.502 [2024-11-16 18:54:17.779487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:34.502 BaseBdev3 00:13:34.502 18:54:17 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 BaseBdev4_malloc 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 [2024-11-16 18:54:17.832752] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:34.502 [2024-11-16 18:54:17.832807] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.502 [2024-11-16 18:54:17.832826] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:34.502 [2024-11-16 18:54:17.832836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.502 [2024-11-16 18:54:17.834810] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.502 [2024-11-16 18:54:17.834848] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:34.502 BaseBdev4 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 
512 -b spare_malloc 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 spare_malloc 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 spare_delay 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 [2024-11-16 18:54:17.899256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:34.502 [2024-11-16 18:54:17.899316] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.502 [2024-11-16 18:54:17.899334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:34.502 [2024-11-16 18:54:17.899344] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.502 [2024-11-16 18:54:17.901335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.502 [2024-11-16 18:54:17.901375] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:34.502 spare 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 [2024-11-16 18:54:17.911285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.502 [2024-11-16 18:54:17.912977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.502 [2024-11-16 18:54:17.913037] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:34.502 [2024-11-16 18:54:17.913087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:34.502 [2024-11-16 18:54:17.913159] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:34.502 [2024-11-16 18:54:17.913171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:34.502 [2024-11-16 18:54:17.913400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:34.502 [2024-11-16 18:54:17.913559] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:34.502 [2024-11-16 18:54:17.913571] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:34.502 [2024-11-16 18:54:17.913718] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:34.502 18:54:17 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.502 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.502 "name": "raid_bdev1", 00:13:34.502 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:34.502 "strip_size_kb": 0, 00:13:34.502 "state": "online", 00:13:34.502 "raid_level": "raid1", 00:13:34.502 "superblock": false, 00:13:34.502 "num_base_bdevs": 4, 00:13:34.502 "num_base_bdevs_discovered": 4, 00:13:34.502 "num_base_bdevs_operational": 4, 00:13:34.502 "base_bdevs_list": [ 00:13:34.502 
{ 00:13:34.502 "name": "BaseBdev1", 00:13:34.502 "uuid": "065d8252-9cb4-5713-b141-defefff129e0", 00:13:34.502 "is_configured": true, 00:13:34.502 "data_offset": 0, 00:13:34.502 "data_size": 65536 00:13:34.502 }, 00:13:34.502 { 00:13:34.502 "name": "BaseBdev2", 00:13:34.502 "uuid": "70148e07-200e-594a-ba32-0292d765a1c7", 00:13:34.502 "is_configured": true, 00:13:34.502 "data_offset": 0, 00:13:34.502 "data_size": 65536 00:13:34.502 }, 00:13:34.502 { 00:13:34.503 "name": "BaseBdev3", 00:13:34.503 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:34.503 "is_configured": true, 00:13:34.503 "data_offset": 0, 00:13:34.503 "data_size": 65536 00:13:34.503 }, 00:13:34.503 { 00:13:34.503 "name": "BaseBdev4", 00:13:34.503 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:34.503 "is_configured": true, 00:13:34.503 "data_offset": 0, 00:13:34.503 "data_size": 65536 00:13:34.503 } 00:13:34.503 ] 00:13:34.503 }' 00:13:34.503 18:54:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.503 18:54:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:35.072 [2024-11-16 18:54:18.422787] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 
00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.072 [2024-11-16 18:54:18.522224] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.072 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.332 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.332 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.332 "name": "raid_bdev1", 00:13:35.332 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:35.332 "strip_size_kb": 0, 00:13:35.332 "state": "online", 00:13:35.332 "raid_level": "raid1", 00:13:35.332 "superblock": false, 00:13:35.332 "num_base_bdevs": 4, 00:13:35.332 "num_base_bdevs_discovered": 3, 00:13:35.332 "num_base_bdevs_operational": 3, 00:13:35.332 "base_bdevs_list": [ 00:13:35.332 { 00:13:35.332 "name": null, 00:13:35.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.332 "is_configured": false, 00:13:35.332 "data_offset": 0, 00:13:35.332 "data_size": 65536 00:13:35.332 }, 00:13:35.332 { 00:13:35.332 "name": "BaseBdev2", 00:13:35.332 "uuid": "70148e07-200e-594a-ba32-0292d765a1c7", 00:13:35.332 "is_configured": true, 00:13:35.332 "data_offset": 0, 00:13:35.332 "data_size": 65536 00:13:35.332 }, 00:13:35.332 { 00:13:35.332 "name": "BaseBdev3", 00:13:35.332 "uuid": 
"5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:35.332 "is_configured": true, 00:13:35.332 "data_offset": 0, 00:13:35.332 "data_size": 65536 00:13:35.332 }, 00:13:35.332 { 00:13:35.332 "name": "BaseBdev4", 00:13:35.332 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:35.332 "is_configured": true, 00:13:35.332 "data_offset": 0, 00:13:35.332 "data_size": 65536 00:13:35.332 } 00:13:35.332 ] 00:13:35.332 }' 00:13:35.332 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.332 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.332 [2024-11-16 18:54:18.609920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:35.332 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:35.332 Zero copy mechanism will not be used. 00:13:35.332 Running I/O for 60 seconds... 00:13:35.592 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:35.592 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.592 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.592 [2024-11-16 18:54:18.933418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.592 18:54:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.592 18:54:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:35.592 [2024-11-16 18:54:19.000388] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:35.592 [2024-11-16 18:54:19.002409] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:35.876 [2024-11-16 18:54:19.117121] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:35.876 
[2024-11-16 18:54:19.118655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:35.876 [2024-11-16 18:54:19.329402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:35.876 [2024-11-16 18:54:19.329845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:36.403 230.00 IOPS, 690.00 MiB/s [2024-11-16T18:54:19.875Z] [2024-11-16 18:54:19.689642] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:36.403 [2024-11-16 18:54:19.690023] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:36.663 [2024-11-16 18:54:19.968817] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:36.663 18:54:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:36.663 18:54:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:36.663 18:54:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:36.663 18:54:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:36.663 18:54:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:36.663 18:54:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.663 18:54:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.663 18:54:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.663 18:54:19 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:36.663 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.663 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:36.663 "name": "raid_bdev1", 00:13:36.663 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:36.663 "strip_size_kb": 0, 00:13:36.663 "state": "online", 00:13:36.663 "raid_level": "raid1", 00:13:36.663 "superblock": false, 00:13:36.663 "num_base_bdevs": 4, 00:13:36.663 "num_base_bdevs_discovered": 4, 00:13:36.663 "num_base_bdevs_operational": 4, 00:13:36.663 "process": { 00:13:36.663 "type": "rebuild", 00:13:36.663 "target": "spare", 00:13:36.663 "progress": { 00:13:36.663 "blocks": 14336, 00:13:36.663 "percent": 21 00:13:36.663 } 00:13:36.663 }, 00:13:36.663 "base_bdevs_list": [ 00:13:36.663 { 00:13:36.663 "name": "spare", 00:13:36.663 "uuid": "6ab6655f-dbe2-5600-9af3-e6604d630acd", 00:13:36.663 "is_configured": true, 00:13:36.663 "data_offset": 0, 00:13:36.663 "data_size": 65536 00:13:36.663 }, 00:13:36.663 { 00:13:36.663 "name": "BaseBdev2", 00:13:36.663 "uuid": "70148e07-200e-594a-ba32-0292d765a1c7", 00:13:36.663 "is_configured": true, 00:13:36.663 "data_offset": 0, 00:13:36.663 "data_size": 65536 00:13:36.663 }, 00:13:36.663 { 00:13:36.663 "name": "BaseBdev3", 00:13:36.663 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:36.663 "is_configured": true, 00:13:36.663 "data_offset": 0, 00:13:36.663 "data_size": 65536 00:13:36.663 }, 00:13:36.663 { 00:13:36.663 "name": "BaseBdev4", 00:13:36.663 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:36.663 "is_configured": true, 00:13:36.663 "data_offset": 0, 00:13:36.663 "data_size": 65536 00:13:36.663 } 00:13:36.663 ] 00:13:36.663 }' 00:13:36.663 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:36.663 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:36.663 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.923 [2024-11-16 18:54:20.139422] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.923 [2024-11-16 18:54:20.194870] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:36.923 [2024-11-16 18:54:20.297537] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:36.923 [2024-11-16 18:54:20.307346] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.923 [2024-11-16 18:54:20.307442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:36.923 [2024-11-16 18:54:20.307479] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:36.923 [2024-11-16 18:54:20.343283] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.923 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.183 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.183 "name": "raid_bdev1", 00:13:37.183 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:37.183 "strip_size_kb": 0, 00:13:37.183 "state": "online", 00:13:37.183 "raid_level": "raid1", 00:13:37.183 "superblock": false, 00:13:37.183 "num_base_bdevs": 4, 00:13:37.183 "num_base_bdevs_discovered": 3, 00:13:37.183 "num_base_bdevs_operational": 3, 00:13:37.183 "base_bdevs_list": [ 00:13:37.183 { 00:13:37.183 "name": null, 00:13:37.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.183 "is_configured": false, 00:13:37.183 "data_offset": 0, 00:13:37.183 "data_size": 65536 00:13:37.183 }, 00:13:37.183 { 00:13:37.183 "name": "BaseBdev2", 
00:13:37.183 "uuid": "70148e07-200e-594a-ba32-0292d765a1c7", 00:13:37.183 "is_configured": true, 00:13:37.183 "data_offset": 0, 00:13:37.183 "data_size": 65536 00:13:37.183 }, 00:13:37.183 { 00:13:37.183 "name": "BaseBdev3", 00:13:37.183 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:37.183 "is_configured": true, 00:13:37.183 "data_offset": 0, 00:13:37.183 "data_size": 65536 00:13:37.183 }, 00:13:37.183 { 00:13:37.183 "name": "BaseBdev4", 00:13:37.183 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:37.183 "is_configured": true, 00:13:37.183 "data_offset": 0, 00:13:37.183 "data_size": 65536 00:13:37.183 } 00:13:37.183 ] 00:13:37.183 }' 00:13:37.183 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.183 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.443 185.50 IOPS, 556.50 MiB/s [2024-11-16T18:54:20.915Z] 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:37.443 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.443 "name": "raid_bdev1", 00:13:37.443 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:37.443 "strip_size_kb": 0, 00:13:37.443 "state": "online", 00:13:37.443 "raid_level": "raid1", 00:13:37.443 "superblock": false, 00:13:37.443 "num_base_bdevs": 4, 00:13:37.443 "num_base_bdevs_discovered": 3, 00:13:37.443 "num_base_bdevs_operational": 3, 00:13:37.443 "base_bdevs_list": [ 00:13:37.443 { 00:13:37.443 "name": null, 00:13:37.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.443 "is_configured": false, 00:13:37.443 "data_offset": 0, 00:13:37.443 "data_size": 65536 00:13:37.443 }, 00:13:37.443 { 00:13:37.443 "name": "BaseBdev2", 00:13:37.443 "uuid": "70148e07-200e-594a-ba32-0292d765a1c7", 00:13:37.443 "is_configured": true, 00:13:37.443 "data_offset": 0, 00:13:37.443 "data_size": 65536 00:13:37.443 }, 00:13:37.443 { 00:13:37.443 "name": "BaseBdev3", 00:13:37.443 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:37.443 "is_configured": true, 00:13:37.443 "data_offset": 0, 00:13:37.443 "data_size": 65536 00:13:37.443 }, 00:13:37.443 { 00:13:37.443 "name": "BaseBdev4", 00:13:37.443 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:37.443 "is_configured": true, 00:13:37.443 "data_offset": 0, 00:13:37.444 "data_size": 65536 00:13:37.444 } 00:13:37.444 ] 00:13:37.444 }' 00:13:37.444 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.708 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.708 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.708 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.708 18:54:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 
spare 00:13:37.708 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.708 18:54:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.708 [2024-11-16 18:54:21.005420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.708 18:54:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.708 18:54:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:37.708 [2024-11-16 18:54:21.078865] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:37.708 [2024-11-16 18:54:21.080886] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.974 [2024-11-16 18:54:21.183415] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:37.974 [2024-11-16 18:54:21.184030] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:37.974 [2024-11-16 18:54:21.300542] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:37.974 [2024-11-16 18:54:21.301345] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:38.233 181.67 IOPS, 545.00 MiB/s [2024-11-16T18:54:21.705Z] [2024-11-16 18:54:21.647080] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:38.494 [2024-11-16 18:54:21.867612] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.754 "name": "raid_bdev1", 00:13:38.754 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:38.754 "strip_size_kb": 0, 00:13:38.754 "state": "online", 00:13:38.754 "raid_level": "raid1", 00:13:38.754 "superblock": false, 00:13:38.754 "num_base_bdevs": 4, 00:13:38.754 "num_base_bdevs_discovered": 4, 00:13:38.754 "num_base_bdevs_operational": 4, 00:13:38.754 "process": { 00:13:38.754 "type": "rebuild", 00:13:38.754 "target": "spare", 00:13:38.754 "progress": { 00:13:38.754 "blocks": 10240, 00:13:38.754 "percent": 15 00:13:38.754 } 00:13:38.754 }, 00:13:38.754 "base_bdevs_list": [ 00:13:38.754 { 00:13:38.754 "name": "spare", 00:13:38.754 "uuid": "6ab6655f-dbe2-5600-9af3-e6604d630acd", 00:13:38.754 "is_configured": true, 00:13:38.754 "data_offset": 0, 00:13:38.754 "data_size": 65536 00:13:38.754 }, 00:13:38.754 { 00:13:38.754 "name": "BaseBdev2", 00:13:38.754 "uuid": "70148e07-200e-594a-ba32-0292d765a1c7", 00:13:38.754 "is_configured": true, 00:13:38.754 
"data_offset": 0, 00:13:38.754 "data_size": 65536 00:13:38.754 }, 00:13:38.754 { 00:13:38.754 "name": "BaseBdev3", 00:13:38.754 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:38.754 "is_configured": true, 00:13:38.754 "data_offset": 0, 00:13:38.754 "data_size": 65536 00:13:38.754 }, 00:13:38.754 { 00:13:38.754 "name": "BaseBdev4", 00:13:38.754 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:38.754 "is_configured": true, 00:13:38.754 "data_offset": 0, 00:13:38.754 "data_size": 65536 00:13:38.754 } 00:13:38.754 ] 00:13:38.754 }' 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.754 [2024-11-16 18:54:22.208469] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:38.754 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.754 [2024-11-16 18:54:22.218489] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:39.014 [2024-11-16 18:54:22.319073] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:39.014 [2024-11-16 18:54:22.426965] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:39.014 [2024-11-16 18:54:22.427058] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:39.014 [2024-11-16 18:54:22.429656] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 
00:13:39.014 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.275 "name": "raid_bdev1", 00:13:39.275 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:39.275 "strip_size_kb": 0, 00:13:39.275 "state": "online", 00:13:39.275 "raid_level": "raid1", 00:13:39.275 "superblock": false, 00:13:39.275 "num_base_bdevs": 4, 00:13:39.275 "num_base_bdevs_discovered": 3, 00:13:39.275 "num_base_bdevs_operational": 3, 00:13:39.275 "process": { 00:13:39.275 "type": "rebuild", 00:13:39.275 "target": "spare", 00:13:39.275 "progress": { 00:13:39.275 "blocks": 16384, 00:13:39.275 "percent": 25 00:13:39.275 } 00:13:39.275 }, 00:13:39.275 "base_bdevs_list": [ 00:13:39.275 { 00:13:39.275 "name": "spare", 00:13:39.275 "uuid": "6ab6655f-dbe2-5600-9af3-e6604d630acd", 00:13:39.275 "is_configured": true, 00:13:39.275 "data_offset": 0, 00:13:39.275 "data_size": 65536 00:13:39.275 }, 00:13:39.275 { 00:13:39.275 "name": null, 00:13:39.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.275 "is_configured": false, 00:13:39.275 "data_offset": 0, 00:13:39.275 "data_size": 65536 00:13:39.275 }, 00:13:39.275 { 00:13:39.275 "name": "BaseBdev3", 00:13:39.275 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:39.275 "is_configured": true, 00:13:39.275 "data_offset": 0, 00:13:39.275 "data_size": 65536 00:13:39.275 }, 00:13:39.275 { 00:13:39.275 "name": "BaseBdev4", 00:13:39.275 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:39.275 "is_configured": true, 00:13:39.275 "data_offset": 0, 00:13:39.275 "data_size": 65536 00:13:39.275 } 00:13:39.275 ] 00:13:39.275 }' 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.275 18:54:22 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=464 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.275 159.75 IOPS, 479.25 MiB/s [2024-11-16T18:54:22.747Z] 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.275 "name": "raid_bdev1", 00:13:39.275 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:39.275 "strip_size_kb": 0, 00:13:39.275 "state": "online", 00:13:39.275 "raid_level": "raid1", 00:13:39.275 "superblock": false, 00:13:39.275 "num_base_bdevs": 4, 00:13:39.275 "num_base_bdevs_discovered": 3, 
00:13:39.275 "num_base_bdevs_operational": 3, 00:13:39.275 "process": { 00:13:39.275 "type": "rebuild", 00:13:39.275 "target": "spare", 00:13:39.275 "progress": { 00:13:39.275 "blocks": 18432, 00:13:39.275 "percent": 28 00:13:39.275 } 00:13:39.275 }, 00:13:39.275 "base_bdevs_list": [ 00:13:39.275 { 00:13:39.275 "name": "spare", 00:13:39.275 "uuid": "6ab6655f-dbe2-5600-9af3-e6604d630acd", 00:13:39.275 "is_configured": true, 00:13:39.275 "data_offset": 0, 00:13:39.275 "data_size": 65536 00:13:39.275 }, 00:13:39.275 { 00:13:39.275 "name": null, 00:13:39.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.275 "is_configured": false, 00:13:39.275 "data_offset": 0, 00:13:39.275 "data_size": 65536 00:13:39.275 }, 00:13:39.275 { 00:13:39.275 "name": "BaseBdev3", 00:13:39.275 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:39.275 "is_configured": true, 00:13:39.275 "data_offset": 0, 00:13:39.275 "data_size": 65536 00:13:39.275 }, 00:13:39.275 { 00:13:39.275 "name": "BaseBdev4", 00:13:39.275 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:39.275 "is_configured": true, 00:13:39.275 "data_offset": 0, 00:13:39.275 "data_size": 65536 00:13:39.275 } 00:13:39.275 ] 00:13:39.275 }' 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.275 [2024-11-16 18:54:22.674948] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:39.275 18:54:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:39.535 [2024-11-16 18:54:22.803116] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:40.105 [2024-11-16 18:54:23.459478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:40.365 [2024-11-16 18:54:23.589869] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:40.365 148.20 IOPS, 444.60 MiB/s [2024-11-16T18:54:23.837Z] 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:40.365 "name": "raid_bdev1", 00:13:40.365 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:40.365 "strip_size_kb": 0, 00:13:40.365 "state": "online", 00:13:40.365 "raid_level": 
"raid1", 00:13:40.365 "superblock": false, 00:13:40.365 "num_base_bdevs": 4, 00:13:40.365 "num_base_bdevs_discovered": 3, 00:13:40.365 "num_base_bdevs_operational": 3, 00:13:40.365 "process": { 00:13:40.365 "type": "rebuild", 00:13:40.365 "target": "spare", 00:13:40.365 "progress": { 00:13:40.365 "blocks": 36864, 00:13:40.365 "percent": 56 00:13:40.365 } 00:13:40.365 }, 00:13:40.365 "base_bdevs_list": [ 00:13:40.365 { 00:13:40.365 "name": "spare", 00:13:40.365 "uuid": "6ab6655f-dbe2-5600-9af3-e6604d630acd", 00:13:40.365 "is_configured": true, 00:13:40.365 "data_offset": 0, 00:13:40.365 "data_size": 65536 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "name": null, 00:13:40.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.365 "is_configured": false, 00:13:40.365 "data_offset": 0, 00:13:40.365 "data_size": 65536 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "name": "BaseBdev3", 00:13:40.365 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:40.365 "is_configured": true, 00:13:40.365 "data_offset": 0, 00:13:40.365 "data_size": 65536 00:13:40.365 }, 00:13:40.365 { 00:13:40.365 "name": "BaseBdev4", 00:13:40.365 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:40.365 "is_configured": true, 00:13:40.365 "data_offset": 0, 00:13:40.365 "data_size": 65536 00:13:40.365 } 00:13:40.365 ] 00:13:40.365 }' 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.365 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:40.625 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.625 18:54:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:40.885 [2024-11-16 18:54:24.124957] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 
offset_begin: 43008 offset_end: 49152 00:13:40.885 [2024-11-16 18:54:24.326746] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:40.885 [2024-11-16 18:54:24.327038] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:41.403 129.00 IOPS, 387.00 MiB/s [2024-11-16T18:54:24.875Z] [2024-11-16 18:54:24.655559] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.664 "name": "raid_bdev1", 00:13:41.664 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 
00:13:41.664 "strip_size_kb": 0, 00:13:41.664 "state": "online", 00:13:41.664 "raid_level": "raid1", 00:13:41.664 "superblock": false, 00:13:41.664 "num_base_bdevs": 4, 00:13:41.664 "num_base_bdevs_discovered": 3, 00:13:41.664 "num_base_bdevs_operational": 3, 00:13:41.664 "process": { 00:13:41.664 "type": "rebuild", 00:13:41.664 "target": "spare", 00:13:41.664 "progress": { 00:13:41.664 "blocks": 55296, 00:13:41.664 "percent": 84 00:13:41.664 } 00:13:41.664 }, 00:13:41.664 "base_bdevs_list": [ 00:13:41.664 { 00:13:41.664 "name": "spare", 00:13:41.664 "uuid": "6ab6655f-dbe2-5600-9af3-e6604d630acd", 00:13:41.664 "is_configured": true, 00:13:41.664 "data_offset": 0, 00:13:41.664 "data_size": 65536 00:13:41.664 }, 00:13:41.664 { 00:13:41.664 "name": null, 00:13:41.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.664 "is_configured": false, 00:13:41.664 "data_offset": 0, 00:13:41.664 "data_size": 65536 00:13:41.664 }, 00:13:41.664 { 00:13:41.664 "name": "BaseBdev3", 00:13:41.664 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:41.664 "is_configured": true, 00:13:41.664 "data_offset": 0, 00:13:41.664 "data_size": 65536 00:13:41.664 }, 00:13:41.664 { 00:13:41.664 "name": "BaseBdev4", 00:13:41.664 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:41.664 "is_configured": true, 00:13:41.664 "data_offset": 0, 00:13:41.664 "data_size": 65536 00:13:41.664 } 00:13:41.664 ] 00:13:41.664 }' 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.664 [2024-11-16 18:54:24.969331] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:41.664 18:54:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.664 18:54:25 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:41.664 18:54:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:41.664 [2024-11-16 18:54:25.082491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:42.233 [2024-11-16 18:54:25.506460] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:42.233 115.86 IOPS, 347.57 MiB/s [2024-11-16T18:54:25.705Z] [2024-11-16 18:54:25.606281] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:42.233 [2024-11-16 18:54:25.608457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.802 18:54:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.802 "name": "raid_bdev1", 00:13:42.802 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:42.802 "strip_size_kb": 0, 00:13:42.802 "state": "online", 00:13:42.802 "raid_level": "raid1", 00:13:42.802 "superblock": false, 00:13:42.802 "num_base_bdevs": 4, 00:13:42.802 "num_base_bdevs_discovered": 3, 00:13:42.802 "num_base_bdevs_operational": 3, 00:13:42.802 "base_bdevs_list": [ 00:13:42.802 { 00:13:42.802 "name": "spare", 00:13:42.802 "uuid": "6ab6655f-dbe2-5600-9af3-e6604d630acd", 00:13:42.802 "is_configured": true, 00:13:42.802 "data_offset": 0, 00:13:42.802 "data_size": 65536 00:13:42.802 }, 00:13:42.802 { 00:13:42.802 "name": null, 00:13:42.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.802 "is_configured": false, 00:13:42.802 "data_offset": 0, 00:13:42.802 "data_size": 65536 00:13:42.802 }, 00:13:42.802 { 00:13:42.802 "name": "BaseBdev3", 00:13:42.802 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:42.802 "is_configured": true, 00:13:42.802 "data_offset": 0, 00:13:42.802 "data_size": 65536 00:13:42.802 }, 00:13:42.802 { 00:13:42.802 "name": "BaseBdev4", 00:13:42.802 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:42.802 "is_configured": true, 00:13:42.802 "data_offset": 0, 00:13:42.802 "data_size": 65536 00:13:42.802 } 00:13:42.802 ] 00:13:42.802 }' 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.802 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.802 "name": "raid_bdev1", 00:13:42.802 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:42.802 "strip_size_kb": 0, 00:13:42.802 "state": "online", 00:13:42.802 "raid_level": "raid1", 00:13:42.802 "superblock": false, 00:13:42.802 "num_base_bdevs": 4, 00:13:42.802 "num_base_bdevs_discovered": 3, 00:13:42.802 "num_base_bdevs_operational": 3, 00:13:42.802 "base_bdevs_list": [ 00:13:42.802 { 00:13:42.802 "name": "spare", 00:13:42.802 "uuid": "6ab6655f-dbe2-5600-9af3-e6604d630acd", 00:13:42.802 "is_configured": true, 00:13:42.802 "data_offset": 0, 00:13:42.802 "data_size": 65536 00:13:42.802 }, 00:13:42.802 { 00:13:42.802 "name": null, 00:13:42.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.802 "is_configured": false, 00:13:42.802 "data_offset": 0, 00:13:42.803 "data_size": 65536 00:13:42.803 }, 00:13:42.803 { 
00:13:42.803 "name": "BaseBdev3", 00:13:42.803 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:42.803 "is_configured": true, 00:13:42.803 "data_offset": 0, 00:13:42.803 "data_size": 65536 00:13:42.803 }, 00:13:42.803 { 00:13:42.803 "name": "BaseBdev4", 00:13:42.803 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:42.803 "is_configured": true, 00:13:42.803 "data_offset": 0, 00:13:42.803 "data_size": 65536 00:13:42.803 } 00:13:42.803 ] 00:13:42.803 }' 00:13:42.803 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.803 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:42.803 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- 
# local tmp 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.062 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:43.062 "name": "raid_bdev1", 00:13:43.062 "uuid": "523d690f-0d52-4fdc-b3db-7344a1a332d1", 00:13:43.062 "strip_size_kb": 0, 00:13:43.062 "state": "online", 00:13:43.062 "raid_level": "raid1", 00:13:43.062 "superblock": false, 00:13:43.062 "num_base_bdevs": 4, 00:13:43.062 "num_base_bdevs_discovered": 3, 00:13:43.062 "num_base_bdevs_operational": 3, 00:13:43.062 "base_bdevs_list": [ 00:13:43.062 { 00:13:43.062 "name": "spare", 00:13:43.062 "uuid": "6ab6655f-dbe2-5600-9af3-e6604d630acd", 00:13:43.062 "is_configured": true, 00:13:43.062 "data_offset": 0, 00:13:43.063 "data_size": 65536 00:13:43.063 }, 00:13:43.063 { 00:13:43.063 "name": null, 00:13:43.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:43.063 "is_configured": false, 00:13:43.063 "data_offset": 0, 00:13:43.063 "data_size": 65536 00:13:43.063 }, 00:13:43.063 { 00:13:43.063 "name": "BaseBdev3", 00:13:43.063 "uuid": "5d074f96-9bcc-5fa1-a43b-de614be4ff81", 00:13:43.063 "is_configured": true, 00:13:43.063 "data_offset": 0, 00:13:43.063 "data_size": 65536 00:13:43.063 }, 00:13:43.063 { 00:13:43.063 "name": "BaseBdev4", 00:13:43.063 "uuid": "fb188bc2-66bc-517b-bbcf-97d4f525a1b2", 00:13:43.063 "is_configured": true, 00:13:43.063 "data_offset": 0, 00:13:43.063 "data_size": 65536 00:13:43.063 } 00:13:43.063 ] 00:13:43.063 }' 00:13:43.063 18:54:26 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:43.063 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.321 106.75 IOPS, 320.25 MiB/s [2024-11-16T18:54:26.793Z] 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:43.321 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.321 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.321 [2024-11-16 18:54:26.719812] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:43.321 [2024-11-16 18:54:26.719961] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.581 00:13:43.581 Latency(us) 00:13:43.581 [2024-11-16T18:54:27.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.581 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:43.581 raid_bdev1 : 8.21 104.66 313.99 0.00 0.00 13645.23 313.01 111726.00 00:13:43.581 [2024-11-16T18:54:27.053Z] =================================================================================================================== 00:13:43.581 [2024-11-16T18:54:27.053Z] Total : 104.66 313.99 0.00 0.00 13645.23 313.01 111726.00 00:13:43.581 [2024-11-16 18:54:26.825911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.581 [2024-11-16 18:54:26.826053] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.581 [2024-11-16 18:54:26.826208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.581 [2024-11-16 18:54:26.826257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:43.581 { 00:13:43.581 "results": [ 00:13:43.581 { 00:13:43.581 "job": 
"raid_bdev1", 00:13:43.581 "core_mask": "0x1", 00:13:43.581 "workload": "randrw", 00:13:43.581 "percentage": 50, 00:13:43.581 "status": "finished", 00:13:43.581 "queue_depth": 2, 00:13:43.581 "io_size": 3145728, 00:13:43.581 "runtime": 8.207294, 00:13:43.581 "iops": 104.66299854738968, 00:13:43.581 "mibps": 313.988995642169, 00:13:43.581 "io_failed": 0, 00:13:43.581 "io_timeout": 0, 00:13:43.581 "avg_latency_us": 13645.234798257341, 00:13:43.581 "min_latency_us": 313.0131004366812, 00:13:43.581 "max_latency_us": 111726.00174672488 00:13:43.581 } 00:13:43.581 ], 00:13:43.581 "core_count": 1 00:13:43.581 } 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.581 
18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.581 18:54:26 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:43.840 /dev/nbd0 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:43.840 1+0 records in 00:13:43.840 1+0 records out 00:13:43.840 4096 bytes (4.1 kB, 
4.0 KiB) copied, 0.000515332 s, 7.9 MB/s 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:43.840 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:44.099 /dev/nbd1 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.099 1+0 records in 00:13:44.099 1+0 records out 00:13:44.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000539699 s, 7.6 MB/s 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.099 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.358 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:44.616 /dev/nbd1 00:13:44.616 18:54:27 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd1 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:44.616 1+0 records in 00:13:44.616 1+0 records out 00:13:44.616 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484109 s, 8.5 MB/s 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:44.616 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:44.617 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:44.617 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:44.617 18:54:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:44.875 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:45.135 18:54:28 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:45.135 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78465 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78465 ']' 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78465 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:45.136 
18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78465 00:13:45.136 killing process with pid 78465 00:13:45.136 Received shutdown signal, test time was about 10.001464 seconds 00:13:45.136 00:13:45.136 Latency(us) 00:13:45.136 [2024-11-16T18:54:28.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:45.136 [2024-11-16T18:54:28.608Z] =================================================================================================================== 00:13:45.136 [2024-11-16T18:54:28.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78465' 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78465 00:13:45.136 [2024-11-16 18:54:28.594273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:45.136 18:54:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78465 00:13:45.703 [2024-11-16 18:54:28.996093] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:46.663 18:54:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:46.663 00:13:46.663 real 0m13.430s 00:13:46.663 user 0m16.860s 00:13:46.663 sys 0m1.925s 00:13:46.664 18:54:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.664 ************************************ 00:13:46.664 END TEST raid_rebuild_test_io 00:13:46.664 ************************************ 00:13:46.664 18:54:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.922 18:54:30 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test 
raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:13:46.922 18:54:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:46.922 18:54:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.922 18:54:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:46.922 ************************************ 00:13:46.922 START TEST raid_rebuild_test_sb_io 00:13:46.922 ************************************ 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.922 18:54:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78874 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # 
waitforlisten 78874 00:13:46.922 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:46.923 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78874 ']' 00:13:46.923 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:46.923 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:46.923 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:46.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:46.923 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:46.923 18:54:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.923 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:46.923 Zero copy mechanism will not be used. 00:13:46.923 [2024-11-16 18:54:30.284036] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:13:46.923 [2024-11-16 18:54:30.284149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78874 ] 00:13:47.181 [2024-11-16 18:54:30.460343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.181 [2024-11-16 18:54:30.574911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:47.440 [2024-11-16 18:54:30.775940] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.440 [2024-11-16 18:54:30.776003] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.699 BaseBdev1_malloc 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.699 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.699 [2024-11-16 18:54:31.164642] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:47.699 [2024-11-16 18:54:31.164718] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.699 [2024-11-16 18:54:31.164743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:47.699 [2024-11-16 18:54:31.164754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.699 [2024-11-16 18:54:31.166813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.699 [2024-11-16 18:54:31.166931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:47.958 BaseBdev1 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 BaseBdev2_malloc 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 [2024-11-16 18:54:31.218911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:47.958 [2024-11-16 18:54:31.218972] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:47.958 [2024-11-16 18:54:31.218992] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:47.958 [2024-11-16 18:54:31.219004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.958 [2024-11-16 18:54:31.220971] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.958 [2024-11-16 18:54:31.221078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:47.958 BaseBdev2 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 BaseBdev3_malloc 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 [2024-11-16 18:54:31.284357] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:47.958 [2024-11-16 18:54:31.284418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.958 [2024-11-16 18:54:31.284441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:47.958 
[2024-11-16 18:54:31.284452] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.958 [2024-11-16 18:54:31.286552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.958 [2024-11-16 18:54:31.286594] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:47.958 BaseBdev3 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 BaseBdev4_malloc 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 [2024-11-16 18:54:31.337222] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:47.958 [2024-11-16 18:54:31.337278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.958 [2024-11-16 18:54:31.337295] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:47.958 [2024-11-16 18:54:31.337305] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.958 [2024-11-16 18:54:31.339241] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.958 [2024-11-16 18:54:31.339334] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:47.958 BaseBdev4 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 spare_malloc 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 spare_delay 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 [2024-11-16 18:54:31.409473] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:47.958 [2024-11-16 18:54:31.409576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.958 [2024-11-16 18:54:31.409640] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000a880 00:13:47.958 [2024-11-16 18:54:31.409688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.958 [2024-11-16 18:54:31.411753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.958 [2024-11-16 18:54:31.411823] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:47.958 spare 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.958 [2024-11-16 18:54:31.421498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:47.958 [2024-11-16 18:54:31.423295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:47.958 [2024-11-16 18:54:31.423403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:47.958 [2024-11-16 18:54:31.423477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:47.958 [2024-11-16 18:54:31.423699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:47.958 [2024-11-16 18:54:31.423754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:47.958 [2024-11-16 18:54:31.424034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:47.958 [2024-11-16 18:54:31.424245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:47.958 [2024-11-16 18:54:31.424291] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:47.958 [2024-11-16 18:54:31.424490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.958 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.959 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.959 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.959 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.217 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.217 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.217 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.217 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.217 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.217 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.217 "name": "raid_bdev1", 00:13:48.217 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:48.217 "strip_size_kb": 0, 00:13:48.217 "state": "online", 00:13:48.217 "raid_level": "raid1", 00:13:48.217 "superblock": true, 00:13:48.217 "num_base_bdevs": 4, 00:13:48.217 "num_base_bdevs_discovered": 4, 00:13:48.217 "num_base_bdevs_operational": 4, 00:13:48.217 "base_bdevs_list": [ 00:13:48.217 { 00:13:48.217 "name": "BaseBdev1", 00:13:48.217 "uuid": "b088ddc6-a26b-5137-820a-f6b757c4bbd4", 00:13:48.217 "is_configured": true, 00:13:48.217 "data_offset": 2048, 00:13:48.217 "data_size": 63488 00:13:48.217 }, 00:13:48.217 { 00:13:48.217 "name": "BaseBdev2", 00:13:48.217 "uuid": "2c01be46-6fd7-55f3-b6cb-f04f48191dcf", 00:13:48.217 "is_configured": true, 00:13:48.217 "data_offset": 2048, 00:13:48.217 "data_size": 63488 00:13:48.217 }, 00:13:48.217 { 00:13:48.217 "name": "BaseBdev3", 00:13:48.217 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:48.217 "is_configured": true, 00:13:48.217 "data_offset": 2048, 00:13:48.217 "data_size": 63488 00:13:48.217 }, 00:13:48.217 { 00:13:48.217 "name": "BaseBdev4", 00:13:48.217 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:48.217 "is_configured": true, 00:13:48.217 "data_offset": 2048, 00:13:48.217 "data_size": 63488 00:13:48.217 } 00:13:48.217 ] 00:13:48.217 }' 00:13:48.217 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.217 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.475 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:48.475 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.475 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.475 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:48.475 [2024-11-16 18:54:31.909018] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.475 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.475 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:48.475 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.734 [2024-11-16 18:54:31.988497] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.734 18:54:31 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.734 18:54:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.734 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.734 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.734 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.734 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.734 "name": "raid_bdev1", 00:13:48.734 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:48.734 "strip_size_kb": 0, 00:13:48.734 "state": "online", 00:13:48.734 "raid_level": "raid1", 00:13:48.734 
"superblock": true, 00:13:48.734 "num_base_bdevs": 4, 00:13:48.734 "num_base_bdevs_discovered": 3, 00:13:48.734 "num_base_bdevs_operational": 3, 00:13:48.734 "base_bdevs_list": [ 00:13:48.734 { 00:13:48.734 "name": null, 00:13:48.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.734 "is_configured": false, 00:13:48.734 "data_offset": 0, 00:13:48.734 "data_size": 63488 00:13:48.734 }, 00:13:48.734 { 00:13:48.734 "name": "BaseBdev2", 00:13:48.734 "uuid": "2c01be46-6fd7-55f3-b6cb-f04f48191dcf", 00:13:48.734 "is_configured": true, 00:13:48.734 "data_offset": 2048, 00:13:48.734 "data_size": 63488 00:13:48.734 }, 00:13:48.734 { 00:13:48.734 "name": "BaseBdev3", 00:13:48.734 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:48.734 "is_configured": true, 00:13:48.734 "data_offset": 2048, 00:13:48.734 "data_size": 63488 00:13:48.735 }, 00:13:48.735 { 00:13:48.735 "name": "BaseBdev4", 00:13:48.735 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:48.735 "is_configured": true, 00:13:48.735 "data_offset": 2048, 00:13:48.735 "data_size": 63488 00:13:48.735 } 00:13:48.735 ] 00:13:48.735 }' 00:13:48.735 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.735 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.735 [2024-11-16 18:54:32.088675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:48.735 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:48.735 Zero copy mechanism will not be used. 00:13:48.735 Running I/O for 60 seconds... 
00:13:48.993 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:48.993 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.993 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.993 [2024-11-16 18:54:32.447703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:49.251 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.251 18:54:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:49.251 [2024-11-16 18:54:32.521935] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:13:49.251 [2024-11-16 18:54:32.524292] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.252 [2024-11-16 18:54:32.648151] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:49.252 [2024-11-16 18:54:32.650763] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:49.509 [2024-11-16 18:54:32.888365] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.509 [2024-11-16 18:54:32.889363] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.767 113.00 IOPS, 339.00 MiB/s [2024-11-16T18:54:33.239Z] [2024-11-16 18:54:33.227741] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:50.026 [2024-11-16 18:54:33.444563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:50.026 [2024-11-16 18:54:33.445421] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:50.026 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:50.026 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.026 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:50.026 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:50.285 "name": "raid_bdev1", 00:13:50.285 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:50.285 "strip_size_kb": 0, 00:13:50.285 "state": "online", 00:13:50.285 "raid_level": "raid1", 00:13:50.285 "superblock": true, 00:13:50.285 "num_base_bdevs": 4, 00:13:50.285 "num_base_bdevs_discovered": 4, 00:13:50.285 "num_base_bdevs_operational": 4, 00:13:50.285 "process": { 00:13:50.285 "type": "rebuild", 00:13:50.285 "target": "spare", 00:13:50.285 "progress": { 00:13:50.285 "blocks": 10240, 00:13:50.285 "percent": 16 00:13:50.285 } 00:13:50.285 }, 00:13:50.285 "base_bdevs_list": [ 00:13:50.285 { 00:13:50.285 "name": "spare", 
00:13:50.285 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:50.285 "is_configured": true, 00:13:50.285 "data_offset": 2048, 00:13:50.285 "data_size": 63488 00:13:50.285 }, 00:13:50.285 { 00:13:50.285 "name": "BaseBdev2", 00:13:50.285 "uuid": "2c01be46-6fd7-55f3-b6cb-f04f48191dcf", 00:13:50.285 "is_configured": true, 00:13:50.285 "data_offset": 2048, 00:13:50.285 "data_size": 63488 00:13:50.285 }, 00:13:50.285 { 00:13:50.285 "name": "BaseBdev3", 00:13:50.285 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:50.285 "is_configured": true, 00:13:50.285 "data_offset": 2048, 00:13:50.285 "data_size": 63488 00:13:50.285 }, 00:13:50.285 { 00:13:50.285 "name": "BaseBdev4", 00:13:50.285 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:50.285 "is_configured": true, 00:13:50.285 "data_offset": 2048, 00:13:50.285 "data_size": 63488 00:13:50.285 } 00:13:50.285 ] 00:13:50.285 }' 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.285 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.285 [2024-11-16 18:54:33.647130] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.544 [2024-11-16 18:54:33.765606] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:50.544 [2024-11-16 18:54:33.770885] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.544 [2024-11-16 18:54:33.770940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:50.544 [2024-11-16 18:54:33.770955] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:50.544 [2024-11-16 18:54:33.798236] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.544 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.545 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.545 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.545 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.545 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.545 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.545 "name": "raid_bdev1", 00:13:50.545 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:50.545 "strip_size_kb": 0, 00:13:50.545 "state": "online", 00:13:50.545 "raid_level": "raid1", 00:13:50.545 "superblock": true, 00:13:50.545 "num_base_bdevs": 4, 00:13:50.545 "num_base_bdevs_discovered": 3, 00:13:50.545 "num_base_bdevs_operational": 3, 00:13:50.545 "base_bdevs_list": [ 00:13:50.545 { 00:13:50.545 "name": null, 00:13:50.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.545 "is_configured": false, 00:13:50.545 "data_offset": 0, 00:13:50.545 "data_size": 63488 00:13:50.545 }, 00:13:50.545 { 00:13:50.545 "name": "BaseBdev2", 00:13:50.545 "uuid": "2c01be46-6fd7-55f3-b6cb-f04f48191dcf", 00:13:50.545 "is_configured": true, 00:13:50.545 "data_offset": 2048, 00:13:50.545 "data_size": 63488 00:13:50.545 }, 00:13:50.545 { 00:13:50.545 "name": "BaseBdev3", 00:13:50.545 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:50.545 "is_configured": true, 00:13:50.545 "data_offset": 2048, 00:13:50.545 "data_size": 63488 00:13:50.545 }, 00:13:50.545 { 00:13:50.545 "name": "BaseBdev4", 00:13:50.545 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:50.545 "is_configured": true, 00:13:50.545 "data_offset": 2048, 00:13:50.545 "data_size": 63488 00:13:50.545 } 00:13:50.545 ] 00:13:50.545 }' 00:13:50.545 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.545 18:54:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.803 106.00 IOPS, 318.00 MiB/s [2024-11-16T18:54:34.275Z] 18:54:34 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:50.803 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:50.803 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:50.803 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:50.803 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:50.803 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.803 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.803 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:50.803 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.803 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.062 "name": "raid_bdev1", 00:13:51.062 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:51.062 "strip_size_kb": 0, 00:13:51.062 "state": "online", 00:13:51.062 "raid_level": "raid1", 00:13:51.062 "superblock": true, 00:13:51.062 "num_base_bdevs": 4, 00:13:51.062 "num_base_bdevs_discovered": 3, 00:13:51.062 "num_base_bdevs_operational": 3, 00:13:51.062 "base_bdevs_list": [ 00:13:51.062 { 00:13:51.062 "name": null, 00:13:51.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.062 "is_configured": false, 00:13:51.062 "data_offset": 0, 00:13:51.062 "data_size": 63488 00:13:51.062 }, 00:13:51.062 { 00:13:51.062 "name": "BaseBdev2", 00:13:51.062 "uuid": "2c01be46-6fd7-55f3-b6cb-f04f48191dcf", 00:13:51.062 "is_configured": true, 00:13:51.062 "data_offset": 
2048, 00:13:51.062 "data_size": 63488 00:13:51.062 }, 00:13:51.062 { 00:13:51.062 "name": "BaseBdev3", 00:13:51.062 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:51.062 "is_configured": true, 00:13:51.062 "data_offset": 2048, 00:13:51.062 "data_size": 63488 00:13:51.062 }, 00:13:51.062 { 00:13:51.062 "name": "BaseBdev4", 00:13:51.062 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:51.062 "is_configured": true, 00:13:51.062 "data_offset": 2048, 00:13:51.062 "data_size": 63488 00:13:51.062 } 00:13:51.062 ] 00:13:51.062 }' 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.062 [2024-11-16 18:54:34.385738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.062 18:54:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:51.062 [2024-11-16 18:54:34.454024] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:51.062 [2024-11-16 18:54:34.456306] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:51.321 [2024-11-16 18:54:34.568877] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:51.321 [2024-11-16 18:54:34.571390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:51.321 [2024-11-16 18:54:34.781280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:51.321 [2024-11-16 18:54:34.781924] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:51.580 [2024-11-16 18:54:35.025437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:51.580 [2024-11-16 18:54:35.026516] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:51.838 127.00 IOPS, 381.00 MiB/s [2024-11-16T18:54:35.310Z] [2024-11-16 18:54:35.146314] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:51.838 [2024-11-16 18:54:35.146925] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:52.097 [2024-11-16 18:54:35.379574] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:52.097 [2024-11-16 18:54:35.380665] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.097 18:54:35 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.097 "name": "raid_bdev1", 00:13:52.097 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:52.097 "strip_size_kb": 0, 00:13:52.097 "state": "online", 00:13:52.097 "raid_level": "raid1", 00:13:52.097 "superblock": true, 00:13:52.097 "num_base_bdevs": 4, 00:13:52.097 "num_base_bdevs_discovered": 4, 00:13:52.097 "num_base_bdevs_operational": 4, 00:13:52.097 "process": { 00:13:52.097 "type": "rebuild", 00:13:52.097 "target": "spare", 00:13:52.097 "progress": { 00:13:52.097 "blocks": 14336, 00:13:52.097 "percent": 22 00:13:52.097 } 00:13:52.097 }, 00:13:52.097 "base_bdevs_list": [ 00:13:52.097 { 00:13:52.097 "name": "spare", 00:13:52.097 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:52.097 "is_configured": true, 00:13:52.097 "data_offset": 2048, 00:13:52.097 "data_size": 63488 00:13:52.097 }, 00:13:52.097 { 00:13:52.097 "name": "BaseBdev2", 00:13:52.097 "uuid": "2c01be46-6fd7-55f3-b6cb-f04f48191dcf", 00:13:52.097 "is_configured": true, 00:13:52.097 "data_offset": 2048, 00:13:52.097 "data_size": 63488 00:13:52.097 }, 00:13:52.097 { 00:13:52.097 "name": "BaseBdev3", 00:13:52.097 "uuid": 
"3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:52.097 "is_configured": true, 00:13:52.097 "data_offset": 2048, 00:13:52.097 "data_size": 63488 00:13:52.097 }, 00:13:52.097 { 00:13:52.097 "name": "BaseBdev4", 00:13:52.097 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:52.097 "is_configured": true, 00:13:52.097 "data_offset": 2048, 00:13:52.097 "data_size": 63488 00:13:52.097 } 00:13:52.097 ] 00:13:52.097 }' 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.097 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:52.361 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.361 [2024-11-16 18:54:35.589950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:52.361 
[2024-11-16 18:54:35.813761] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:13:52.361 [2024-11-16 18:54:35.813919] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.361 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.362 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.362 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.635 "name": "raid_bdev1", 00:13:52.635 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:52.635 "strip_size_kb": 0, 00:13:52.635 "state": "online", 
00:13:52.635 "raid_level": "raid1", 00:13:52.635 "superblock": true, 00:13:52.635 "num_base_bdevs": 4, 00:13:52.635 "num_base_bdevs_discovered": 3, 00:13:52.635 "num_base_bdevs_operational": 3, 00:13:52.635 "process": { 00:13:52.635 "type": "rebuild", 00:13:52.635 "target": "spare", 00:13:52.635 "progress": { 00:13:52.635 "blocks": 16384, 00:13:52.635 "percent": 25 00:13:52.635 } 00:13:52.635 }, 00:13:52.635 "base_bdevs_list": [ 00:13:52.635 { 00:13:52.635 "name": "spare", 00:13:52.635 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:52.635 "is_configured": true, 00:13:52.635 "data_offset": 2048, 00:13:52.635 "data_size": 63488 00:13:52.635 }, 00:13:52.635 { 00:13:52.635 "name": null, 00:13:52.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.635 "is_configured": false, 00:13:52.635 "data_offset": 0, 00:13:52.635 "data_size": 63488 00:13:52.635 }, 00:13:52.635 { 00:13:52.635 "name": "BaseBdev3", 00:13:52.635 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:52.635 "is_configured": true, 00:13:52.635 "data_offset": 2048, 00:13:52.635 "data_size": 63488 00:13:52.635 }, 00:13:52.635 { 00:13:52.635 "name": "BaseBdev4", 00:13:52.635 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:52.635 "is_configured": true, 00:13:52.635 "data_offset": 2048, 00:13:52.635 "data_size": 63488 00:13:52.635 } 00:13:52.635 ] 00:13:52.635 }' 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=477 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.635 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.636 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.636 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.636 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.636 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.636 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.636 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.636 18:54:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.636 18:54:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.636 18:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.636 "name": "raid_bdev1", 00:13:52.636 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:52.636 "strip_size_kb": 0, 00:13:52.636 "state": "online", 00:13:52.636 "raid_level": "raid1", 00:13:52.636 "superblock": true, 00:13:52.636 "num_base_bdevs": 4, 00:13:52.636 "num_base_bdevs_discovered": 3, 00:13:52.636 "num_base_bdevs_operational": 3, 00:13:52.636 "process": { 00:13:52.636 "type": "rebuild", 00:13:52.636 "target": "spare", 00:13:52.636 "progress": { 00:13:52.636 "blocks": 18432, 00:13:52.636 "percent": 29 00:13:52.636 } 00:13:52.636 }, 00:13:52.636 "base_bdevs_list": [ 00:13:52.636 { 00:13:52.636 "name": "spare", 00:13:52.636 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 
00:13:52.636 "is_configured": true, 00:13:52.636 "data_offset": 2048, 00:13:52.636 "data_size": 63488 00:13:52.636 }, 00:13:52.636 { 00:13:52.636 "name": null, 00:13:52.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.636 "is_configured": false, 00:13:52.636 "data_offset": 0, 00:13:52.636 "data_size": 63488 00:13:52.636 }, 00:13:52.636 { 00:13:52.636 "name": "BaseBdev3", 00:13:52.636 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:52.636 "is_configured": true, 00:13:52.636 "data_offset": 2048, 00:13:52.636 "data_size": 63488 00:13:52.636 }, 00:13:52.636 { 00:13:52.636 "name": "BaseBdev4", 00:13:52.636 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:52.636 "is_configured": true, 00:13:52.636 "data_offset": 2048, 00:13:52.636 "data_size": 63488 00:13:52.636 } 00:13:52.636 ] 00:13:52.636 }' 00:13:52.636 18:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.636 18:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.636 18:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.895 111.00 IOPS, 333.00 MiB/s [2024-11-16T18:54:36.367Z] 18:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.895 18:54:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:53.155 [2024-11-16 18:54:36.541629] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:53.724 98.40 IOPS, 295.20 MiB/s [2024-11-16T18:54:37.196Z] [2024-11-16 18:54:37.098418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:53.724 [2024-11-16 18:54:37.098964] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:53.724 
18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.724 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.724 "name": "raid_bdev1", 00:13:53.724 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:53.724 "strip_size_kb": 0, 00:13:53.724 "state": "online", 00:13:53.724 "raid_level": "raid1", 00:13:53.725 "superblock": true, 00:13:53.725 "num_base_bdevs": 4, 00:13:53.725 "num_base_bdevs_discovered": 3, 00:13:53.725 "num_base_bdevs_operational": 3, 00:13:53.725 "process": { 00:13:53.725 "type": "rebuild", 00:13:53.725 "target": "spare", 00:13:53.725 "progress": { 00:13:53.725 "blocks": 38912, 00:13:53.725 "percent": 61 00:13:53.725 } 00:13:53.725 }, 00:13:53.725 "base_bdevs_list": [ 00:13:53.725 { 00:13:53.725 "name": "spare", 00:13:53.725 
"uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:53.725 "is_configured": true, 00:13:53.725 "data_offset": 2048, 00:13:53.725 "data_size": 63488 00:13:53.725 }, 00:13:53.725 { 00:13:53.725 "name": null, 00:13:53.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.725 "is_configured": false, 00:13:53.725 "data_offset": 0, 00:13:53.725 "data_size": 63488 00:13:53.725 }, 00:13:53.725 { 00:13:53.725 "name": "BaseBdev3", 00:13:53.725 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:53.725 "is_configured": true, 00:13:53.725 "data_offset": 2048, 00:13:53.725 "data_size": 63488 00:13:53.725 }, 00:13:53.725 { 00:13:53.725 "name": "BaseBdev4", 00:13:53.725 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:53.725 "is_configured": true, 00:13:53.725 "data_offset": 2048, 00:13:53.725 "data_size": 63488 00:13:53.725 } 00:13:53.725 ] 00:13:53.725 }' 00:13:53.725 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.984 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.984 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.984 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.984 18:54:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.242 [2024-11-16 18:54:37.517058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:54.242 [2024-11-16 18:54:37.619328] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:54.501 [2024-11-16 18:54:37.822789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:54.760 90.33 IOPS, 271.00 MiB/s [2024-11-16T18:54:38.232Z] 
[2024-11-16 18:54:38.132022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:54.760 [2024-11-16 18:54:38.132599] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:55.018 [2024-11-16 18:54:38.248807] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.018 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.019 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.019 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:55.019 "name": "raid_bdev1", 00:13:55.019 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:55.019 "strip_size_kb": 0, 00:13:55.019 "state": "online", 
00:13:55.019 "raid_level": "raid1", 00:13:55.019 "superblock": true, 00:13:55.019 "num_base_bdevs": 4, 00:13:55.019 "num_base_bdevs_discovered": 3, 00:13:55.019 "num_base_bdevs_operational": 3, 00:13:55.019 "process": { 00:13:55.019 "type": "rebuild", 00:13:55.019 "target": "spare", 00:13:55.019 "progress": { 00:13:55.019 "blocks": 59392, 00:13:55.019 "percent": 93 00:13:55.019 } 00:13:55.019 }, 00:13:55.019 "base_bdevs_list": [ 00:13:55.019 { 00:13:55.019 "name": "spare", 00:13:55.019 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:55.019 "is_configured": true, 00:13:55.019 "data_offset": 2048, 00:13:55.019 "data_size": 63488 00:13:55.019 }, 00:13:55.019 { 00:13:55.019 "name": null, 00:13:55.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:55.019 "is_configured": false, 00:13:55.019 "data_offset": 0, 00:13:55.019 "data_size": 63488 00:13:55.019 }, 00:13:55.019 { 00:13:55.019 "name": "BaseBdev3", 00:13:55.019 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:55.019 "is_configured": true, 00:13:55.019 "data_offset": 2048, 00:13:55.019 "data_size": 63488 00:13:55.019 }, 00:13:55.019 { 00:13:55.019 "name": "BaseBdev4", 00:13:55.019 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:55.019 "is_configured": true, 00:13:55.019 "data_offset": 2048, 00:13:55.019 "data_size": 63488 00:13:55.019 } 00:13:55.019 ] 00:13:55.019 }' 00:13:55.019 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:55.019 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:55.019 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:55.019 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:55.019 18:54:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:55.019 [2024-11-16 18:54:38.472897] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:55.278 [2024-11-16 18:54:38.579097] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:55.278 [2024-11-16 18:54:38.583190] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.104 81.86 IOPS, 245.57 MiB/s [2024-11-16T18:54:39.576Z] 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:56.104 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:56.104 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.104 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.105 "name": "raid_bdev1", 00:13:56.105 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:56.105 "strip_size_kb": 0, 00:13:56.105 "state": "online", 00:13:56.105 "raid_level": "raid1", 00:13:56.105 "superblock": true, 00:13:56.105 
"num_base_bdevs": 4, 00:13:56.105 "num_base_bdevs_discovered": 3, 00:13:56.105 "num_base_bdevs_operational": 3, 00:13:56.105 "base_bdevs_list": [ 00:13:56.105 { 00:13:56.105 "name": "spare", 00:13:56.105 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:56.105 "is_configured": true, 00:13:56.105 "data_offset": 2048, 00:13:56.105 "data_size": 63488 00:13:56.105 }, 00:13:56.105 { 00:13:56.105 "name": null, 00:13:56.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.105 "is_configured": false, 00:13:56.105 "data_offset": 0, 00:13:56.105 "data_size": 63488 00:13:56.105 }, 00:13:56.105 { 00:13:56.105 "name": "BaseBdev3", 00:13:56.105 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:56.105 "is_configured": true, 00:13:56.105 "data_offset": 2048, 00:13:56.105 "data_size": 63488 00:13:56.105 }, 00:13:56.105 { 00:13:56.105 "name": "BaseBdev4", 00:13:56.105 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:56.105 "is_configured": true, 00:13:56.105 "data_offset": 2048, 00:13:56.105 "data_size": 63488 00:13:56.105 } 00:13:56.105 ] 00:13:56.105 }' 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 
00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.105 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.364 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.364 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:56.364 "name": "raid_bdev1", 00:13:56.364 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:56.364 "strip_size_kb": 0, 00:13:56.364 "state": "online", 00:13:56.364 "raid_level": "raid1", 00:13:56.364 "superblock": true, 00:13:56.364 "num_base_bdevs": 4, 00:13:56.364 "num_base_bdevs_discovered": 3, 00:13:56.364 "num_base_bdevs_operational": 3, 00:13:56.364 "base_bdevs_list": [ 00:13:56.364 { 00:13:56.364 "name": "spare", 00:13:56.364 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:56.364 "is_configured": true, 00:13:56.364 "data_offset": 2048, 00:13:56.364 "data_size": 63488 00:13:56.364 }, 00:13:56.364 { 00:13:56.364 "name": null, 00:13:56.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.365 "is_configured": false, 00:13:56.365 "data_offset": 0, 00:13:56.365 "data_size": 63488 00:13:56.365 }, 00:13:56.365 { 00:13:56.365 "name": "BaseBdev3", 00:13:56.365 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:56.365 "is_configured": true, 00:13:56.365 "data_offset": 2048, 00:13:56.365 "data_size": 63488 00:13:56.365 }, 00:13:56.365 { 00:13:56.365 "name": "BaseBdev4", 
00:13:56.365 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:56.365 "is_configured": true, 00:13:56.365 "data_offset": 2048, 00:13:56.365 "data_size": 63488 00:13:56.365 } 00:13:56.365 ] 00:13:56.365 }' 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:56.365 "name": "raid_bdev1", 00:13:56.365 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:56.365 "strip_size_kb": 0, 00:13:56.365 "state": "online", 00:13:56.365 "raid_level": "raid1", 00:13:56.365 "superblock": true, 00:13:56.365 "num_base_bdevs": 4, 00:13:56.365 "num_base_bdevs_discovered": 3, 00:13:56.365 "num_base_bdevs_operational": 3, 00:13:56.365 "base_bdevs_list": [ 00:13:56.365 { 00:13:56.365 "name": "spare", 00:13:56.365 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:56.365 "is_configured": true, 00:13:56.365 "data_offset": 2048, 00:13:56.365 "data_size": 63488 00:13:56.365 }, 00:13:56.365 { 00:13:56.365 "name": null, 00:13:56.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:56.365 "is_configured": false, 00:13:56.365 "data_offset": 0, 00:13:56.365 "data_size": 63488 00:13:56.365 }, 00:13:56.365 { 00:13:56.365 "name": "BaseBdev3", 00:13:56.365 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:56.365 "is_configured": true, 00:13:56.365 "data_offset": 2048, 00:13:56.365 "data_size": 63488 00:13:56.365 }, 00:13:56.365 { 00:13:56.365 "name": "BaseBdev4", 00:13:56.365 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:56.365 "is_configured": true, 00:13:56.365 "data_offset": 2048, 00:13:56.365 "data_size": 63488 00:13:56.365 } 00:13:56.365 ] 00:13:56.365 }' 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:56.365 18:54:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:13:56.887 76.25 IOPS, 228.75 MiB/s [2024-11-16T18:54:40.359Z] 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.887 [2024-11-16 18:54:40.132985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:56.887 [2024-11-16 18:54:40.133071] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:56.887 00:13:56.887 Latency(us) 00:13:56.887 [2024-11-16T18:54:40.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.887 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:56.887 raid_bdev1 : 8.12 75.48 226.43 0.00 0.00 18256.22 316.59 116762.83 00:13:56.887 [2024-11-16T18:54:40.359Z] =================================================================================================================== 00:13:56.887 [2024-11-16T18:54:40.359Z] Total : 75.48 226.43 0.00 0.00 18256.22 316.59 116762.83 00:13:56.887 [2024-11-16 18:54:40.216004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.887 [2024-11-16 18:54:40.216085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:56.887 [2024-11-16 18:54:40.216211] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:56.887 [2024-11-16 18:54:40.216275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:56.887 { 00:13:56.887 "results": [ 00:13:56.887 { 00:13:56.887 "job": "raid_bdev1", 00:13:56.887 "core_mask": "0x1", 00:13:56.887 "workload": "randrw", 00:13:56.887 "percentage": 50, 00:13:56.887 "status": "finished", 
00:13:56.887 "queue_depth": 2, 00:13:56.887 "io_size": 3145728, 00:13:56.887 "runtime": 8.121652, 00:13:56.887 "iops": 75.47725512001746, 00:13:56.887 "mibps": 226.4317653600524, 00:13:56.887 "io_failed": 0, 00:13:56.887 "io_timeout": 0, 00:13:56.887 "avg_latency_us": 18256.21516915164, 00:13:56.887 "min_latency_us": 316.5903930131004, 00:13:56.887 "max_latency_us": 116762.82969432314 00:13:56.887 } 00:13:56.887 ], 00:13:56.887 "core_count": 1 00:13:56.887 } 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:56.887 
18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:56.887 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:57.146 /dev/nbd0 00:13:57.146 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:57.146 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.147 1+0 records in 00:13:57.147 1+0 records out 00:13:57.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551876 s, 7.4 MB/s 00:13:57.147 
18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.147 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:13:57.406 /dev/nbd1 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.406 1+0 records in 00:13:57.406 1+0 records out 00:13:57.406 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265745 s, 15.4 MB/s 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.406 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:57.665 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:57.665 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.665 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:57.665 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:57.665 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:57.665 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:57.665 18:54:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.665 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:13:57.924 /dev/nbd1 00:13:57.924 18:54:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:57.924 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:57.924 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:57.924 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:57.924 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:57.924 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:57.925 1+0 records in 00:13:57.925 1+0 records out 00:13:57.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207042 s, 19.8 MB/s 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 
00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:57.925 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.185 
18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:58.185 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.448 [2024-11-16 18:54:41.861026] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:58.448 [2024-11-16 18:54:41.861095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.448 [2024-11-16 18:54:41.861119] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:58.448 [2024-11-16 18:54:41.861132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.448 [2024-11-16 18:54:41.863414] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.448 [2024-11-16 18:54:41.863457] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:58.448 [2024-11-16 18:54:41.863552] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:58.448 [2024-11-16 18:54:41.863606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:58.448 [2024-11-16 18:54:41.863775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.448 [2024-11-16 18:54:41.863879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:58.448 spare 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # 
rpc_cmd bdev_wait_for_examine 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.448 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.707 [2024-11-16 18:54:41.963800] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:58.707 [2024-11-16 18:54:41.963840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:58.707 [2024-11-16 18:54:41.964157] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:13:58.707 [2024-11-16 18:54:41.964332] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:58.707 [2024-11-16 18:54:41.964341] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:58.707 [2024-11-16 18:54:41.964531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.707 18:54:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.707 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.707 "name": "raid_bdev1", 00:13:58.707 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:58.707 "strip_size_kb": 0, 00:13:58.707 "state": "online", 00:13:58.707 "raid_level": "raid1", 00:13:58.707 "superblock": true, 00:13:58.707 "num_base_bdevs": 4, 00:13:58.707 "num_base_bdevs_discovered": 3, 00:13:58.707 "num_base_bdevs_operational": 3, 00:13:58.707 "base_bdevs_list": [ 00:13:58.707 { 00:13:58.707 "name": "spare", 00:13:58.707 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:58.707 "is_configured": true, 00:13:58.707 "data_offset": 2048, 00:13:58.707 "data_size": 63488 00:13:58.707 }, 00:13:58.707 { 00:13:58.707 "name": null, 00:13:58.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.707 "is_configured": false, 00:13:58.707 "data_offset": 2048, 00:13:58.707 "data_size": 63488 00:13:58.707 }, 00:13:58.707 { 00:13:58.707 "name": "BaseBdev3", 00:13:58.707 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:58.707 "is_configured": true, 00:13:58.707 "data_offset": 2048, 00:13:58.707 "data_size": 63488 00:13:58.707 }, 
00:13:58.707 { 00:13:58.707 "name": "BaseBdev4", 00:13:58.707 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:58.707 "is_configured": true, 00:13:58.707 "data_offset": 2048, 00:13:58.707 "data_size": 63488 00:13:58.707 } 00:13:58.707 ] 00:13:58.707 }' 00:13:58.707 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.707 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:58.966 "name": "raid_bdev1", 00:13:58.966 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:58.966 "strip_size_kb": 0, 00:13:58.966 "state": "online", 00:13:58.966 "raid_level": "raid1", 00:13:58.966 "superblock": true, 00:13:58.966 "num_base_bdevs": 4, 00:13:58.966 
"num_base_bdevs_discovered": 3, 00:13:58.966 "num_base_bdevs_operational": 3, 00:13:58.966 "base_bdevs_list": [ 00:13:58.966 { 00:13:58.966 "name": "spare", 00:13:58.966 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:13:58.966 "is_configured": true, 00:13:58.966 "data_offset": 2048, 00:13:58.966 "data_size": 63488 00:13:58.966 }, 00:13:58.966 { 00:13:58.966 "name": null, 00:13:58.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:58.966 "is_configured": false, 00:13:58.966 "data_offset": 2048, 00:13:58.966 "data_size": 63488 00:13:58.966 }, 00:13:58.966 { 00:13:58.966 "name": "BaseBdev3", 00:13:58.966 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:58.966 "is_configured": true, 00:13:58.966 "data_offset": 2048, 00:13:58.966 "data_size": 63488 00:13:58.966 }, 00:13:58.966 { 00:13:58.966 "name": "BaseBdev4", 00:13:58.966 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:58.966 "is_configured": true, 00:13:58.966 "data_offset": 2048, 00:13:58.966 "data_size": 63488 00:13:58.966 } 00:13:58.966 ] 00:13:58.966 }' 00:13:58.966 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.225 18:54:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.225 [2024-11-16 18:54:42.556051] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.225 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.226 "name": "raid_bdev1", 00:13:59.226 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:13:59.226 "strip_size_kb": 0, 00:13:59.226 "state": "online", 00:13:59.226 "raid_level": "raid1", 00:13:59.226 "superblock": true, 00:13:59.226 "num_base_bdevs": 4, 00:13:59.226 "num_base_bdevs_discovered": 2, 00:13:59.226 "num_base_bdevs_operational": 2, 00:13:59.226 "base_bdevs_list": [ 00:13:59.226 { 00:13:59.226 "name": null, 00:13:59.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.226 "is_configured": false, 00:13:59.226 "data_offset": 0, 00:13:59.226 "data_size": 63488 00:13:59.226 }, 00:13:59.226 { 00:13:59.226 "name": null, 00:13:59.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.226 "is_configured": false, 00:13:59.226 "data_offset": 2048, 00:13:59.226 "data_size": 63488 00:13:59.226 }, 00:13:59.226 { 00:13:59.226 "name": "BaseBdev3", 00:13:59.226 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:13:59.226 "is_configured": true, 00:13:59.226 "data_offset": 2048, 00:13:59.226 "data_size": 63488 00:13:59.226 }, 00:13:59.226 { 00:13:59.226 "name": "BaseBdev4", 00:13:59.226 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:13:59.226 "is_configured": true, 00:13:59.226 "data_offset": 2048, 00:13:59.226 "data_size": 63488 00:13:59.226 } 00:13:59.226 ] 00:13:59.226 }' 00:13:59.226 18:54:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.226 18:54:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.795 18:54:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:59.795 18:54:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.795 18:54:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.795 [2024-11-16 18:54:43.011553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.795 [2024-11-16 18:54:43.011897] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:13:59.795 [2024-11-16 18:54:43.011989] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:59.795 [2024-11-16 18:54:43.012062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:59.795 [2024-11-16 18:54:43.026907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:13:59.795 18:54:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.795 18:54:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:59.795 [2024-11-16 18:54:43.029010] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:00.731 "name": "raid_bdev1", 00:14:00.731 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:14:00.731 "strip_size_kb": 0, 00:14:00.731 "state": "online", 00:14:00.731 "raid_level": "raid1", 00:14:00.731 "superblock": true, 00:14:00.731 "num_base_bdevs": 4, 00:14:00.731 "num_base_bdevs_discovered": 3, 00:14:00.731 "num_base_bdevs_operational": 3, 00:14:00.731 "process": { 00:14:00.731 "type": "rebuild", 00:14:00.731 "target": "spare", 00:14:00.731 "progress": { 00:14:00.731 "blocks": 20480, 00:14:00.731 "percent": 32 00:14:00.731 } 00:14:00.731 }, 00:14:00.731 "base_bdevs_list": [ 00:14:00.731 { 00:14:00.731 "name": "spare", 00:14:00.731 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:14:00.731 "is_configured": true, 00:14:00.731 "data_offset": 2048, 00:14:00.731 "data_size": 63488 00:14:00.731 }, 00:14:00.731 { 00:14:00.731 "name": null, 00:14:00.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.731 "is_configured": false, 00:14:00.731 "data_offset": 2048, 00:14:00.731 "data_size": 63488 00:14:00.731 }, 00:14:00.731 { 00:14:00.731 "name": "BaseBdev3", 00:14:00.731 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:14:00.731 "is_configured": true, 00:14:00.731 "data_offset": 2048, 00:14:00.731 "data_size": 63488 00:14:00.731 }, 00:14:00.731 { 
00:14:00.731 "name": "BaseBdev4", 00:14:00.731 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:14:00.731 "is_configured": true, 00:14:00.731 "data_offset": 2048, 00:14:00.731 "data_size": 63488 00:14:00.731 } 00:14:00.731 ] 00:14:00.731 }' 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.731 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.731 [2024-11-16 18:54:44.164713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.991 [2024-11-16 18:54:44.234134] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:00.991 [2024-11-16 18:54:44.234217] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.991 [2024-11-16 18:54:44.234234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:00.991 [2024-11-16 18:54:44.234243] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.991 "name": "raid_bdev1", 00:14:00.991 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:14:00.991 "strip_size_kb": 0, 00:14:00.991 "state": "online", 00:14:00.991 "raid_level": "raid1", 00:14:00.991 "superblock": true, 00:14:00.991 "num_base_bdevs": 4, 00:14:00.991 "num_base_bdevs_discovered": 2, 00:14:00.991 "num_base_bdevs_operational": 2, 00:14:00.991 "base_bdevs_list": [ 00:14:00.991 { 00:14:00.991 
"name": null, 00:14:00.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.991 "is_configured": false, 00:14:00.991 "data_offset": 0, 00:14:00.991 "data_size": 63488 00:14:00.991 }, 00:14:00.991 { 00:14:00.991 "name": null, 00:14:00.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:00.991 "is_configured": false, 00:14:00.991 "data_offset": 2048, 00:14:00.991 "data_size": 63488 00:14:00.991 }, 00:14:00.991 { 00:14:00.991 "name": "BaseBdev3", 00:14:00.991 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:14:00.991 "is_configured": true, 00:14:00.991 "data_offset": 2048, 00:14:00.991 "data_size": 63488 00:14:00.991 }, 00:14:00.991 { 00:14:00.991 "name": "BaseBdev4", 00:14:00.991 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:14:00.991 "is_configured": true, 00:14:00.991 "data_offset": 2048, 00:14:00.991 "data_size": 63488 00:14:00.991 } 00:14:00.991 ] 00:14:00.991 }' 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.991 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.559 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:01.559 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.559 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.559 [2024-11-16 18:54:44.729797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:01.559 [2024-11-16 18:54:44.729864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.559 [2024-11-16 18:54:44.729892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:01.559 [2024-11-16 18:54:44.729904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.559 [2024-11-16 18:54:44.730402] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.559 [2024-11-16 18:54:44.730425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:01.559 [2024-11-16 18:54:44.730529] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:01.559 [2024-11-16 18:54:44.730546] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:01.559 [2024-11-16 18:54:44.730556] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:01.559 [2024-11-16 18:54:44.730587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:01.559 [2024-11-16 18:54:44.746571] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:01.559 spare 00:14:01.559 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.559 18:54:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:01.559 [2024-11-16 18:54:44.748488] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.495 "name": "raid_bdev1", 00:14:02.495 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:14:02.495 "strip_size_kb": 0, 00:14:02.495 "state": "online", 00:14:02.495 "raid_level": "raid1", 00:14:02.495 "superblock": true, 00:14:02.495 "num_base_bdevs": 4, 00:14:02.495 "num_base_bdevs_discovered": 3, 00:14:02.495 "num_base_bdevs_operational": 3, 00:14:02.495 "process": { 00:14:02.495 "type": "rebuild", 00:14:02.495 "target": "spare", 00:14:02.495 "progress": { 00:14:02.495 "blocks": 20480, 00:14:02.495 "percent": 32 00:14:02.495 } 00:14:02.495 }, 00:14:02.495 "base_bdevs_list": [ 00:14:02.495 { 00:14:02.495 "name": "spare", 00:14:02.495 "uuid": "fd4e4513-ddb6-51ac-95d7-0bec0013f511", 00:14:02.495 "is_configured": true, 00:14:02.495 "data_offset": 2048, 00:14:02.495 "data_size": 63488 00:14:02.495 }, 00:14:02.495 { 00:14:02.495 "name": null, 00:14:02.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.495 "is_configured": false, 00:14:02.495 "data_offset": 2048, 00:14:02.495 "data_size": 63488 00:14:02.495 }, 00:14:02.495 { 00:14:02.495 "name": "BaseBdev3", 00:14:02.495 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:14:02.495 "is_configured": true, 00:14:02.495 "data_offset": 2048, 00:14:02.495 "data_size": 63488 00:14:02.495 }, 00:14:02.495 { 00:14:02.495 "name": "BaseBdev4", 00:14:02.495 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:14:02.495 "is_configured": true, 00:14:02.495 "data_offset": 2048, 00:14:02.495 "data_size": 63488 00:14:02.495 } 00:14:02.495 
] 00:14:02.495 }' 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.495 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.495 [2024-11-16 18:54:45.872141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.495 [2024-11-16 18:54:45.953636] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:02.495 [2024-11-16 18:54:45.953716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.495 [2024-11-16 18:54:45.953736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:02.495 [2024-11-16 18:54:45.953744] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.756 18:54:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.756 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.756 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.756 "name": "raid_bdev1", 00:14:02.756 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:14:02.756 "strip_size_kb": 0, 00:14:02.756 "state": "online", 00:14:02.756 "raid_level": "raid1", 00:14:02.756 "superblock": true, 00:14:02.756 "num_base_bdevs": 4, 00:14:02.756 "num_base_bdevs_discovered": 2, 00:14:02.756 "num_base_bdevs_operational": 2, 00:14:02.756 "base_bdevs_list": [ 00:14:02.756 { 00:14:02.756 "name": null, 00:14:02.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.756 "is_configured": false, 00:14:02.756 "data_offset": 0, 00:14:02.756 "data_size": 63488 00:14:02.756 }, 00:14:02.756 { 
00:14:02.756 "name": null, 00:14:02.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.756 "is_configured": false, 00:14:02.756 "data_offset": 2048, 00:14:02.756 "data_size": 63488 00:14:02.756 }, 00:14:02.756 { 00:14:02.756 "name": "BaseBdev3", 00:14:02.756 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:14:02.756 "is_configured": true, 00:14:02.756 "data_offset": 2048, 00:14:02.756 "data_size": 63488 00:14:02.756 }, 00:14:02.756 { 00:14:02.756 "name": "BaseBdev4", 00:14:02.756 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:14:02.756 "is_configured": true, 00:14:02.756 "data_offset": 2048, 00:14:02.756 "data_size": 63488 00:14:02.756 } 00:14:02.756 ] 00:14:02.756 }' 00:14:02.756 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.756 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.015 "name": "raid_bdev1", 00:14:03.015 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:14:03.015 "strip_size_kb": 0, 00:14:03.015 "state": "online", 00:14:03.015 "raid_level": "raid1", 00:14:03.015 "superblock": true, 00:14:03.015 "num_base_bdevs": 4, 00:14:03.015 "num_base_bdevs_discovered": 2, 00:14:03.015 "num_base_bdevs_operational": 2, 00:14:03.015 "base_bdevs_list": [ 00:14:03.015 { 00:14:03.015 "name": null, 00:14:03.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.015 "is_configured": false, 00:14:03.015 "data_offset": 0, 00:14:03.015 "data_size": 63488 00:14:03.015 }, 00:14:03.015 { 00:14:03.015 "name": null, 00:14:03.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:03.015 "is_configured": false, 00:14:03.015 "data_offset": 2048, 00:14:03.015 "data_size": 63488 00:14:03.015 }, 00:14:03.015 { 00:14:03.015 "name": "BaseBdev3", 00:14:03.015 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:14:03.015 "is_configured": true, 00:14:03.015 "data_offset": 2048, 00:14:03.015 "data_size": 63488 00:14:03.015 }, 00:14:03.015 { 00:14:03.015 "name": "BaseBdev4", 00:14:03.015 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:14:03.015 "is_configured": true, 00:14:03.015 "data_offset": 2048, 00:14:03.015 "data_size": 63488 00:14:03.015 } 00:14:03.015 ] 00:14:03.015 }' 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:03.015 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.274 [2024-11-16 18:54:46.528780] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:03.274 [2024-11-16 18:54:46.528836] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.274 [2024-11-16 18:54:46.528857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:03.274 [2024-11-16 18:54:46.528866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.274 [2024-11-16 18:54:46.529285] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.274 [2024-11-16 18:54:46.529306] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:03.274 [2024-11-16 18:54:46.529388] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:03.274 [2024-11-16 18:54:46.529413] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:03.274 [2024-11-16 18:54:46.529422] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:03.274 [2024-11-16 18:54:46.529432] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:03.274 BaseBdev1 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.274 18:54:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:04.210 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:04.210 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.210 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.210 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.210 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.210 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:04.210 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.210 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.211 "name": "raid_bdev1", 00:14:04.211 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:14:04.211 "strip_size_kb": 0, 00:14:04.211 "state": "online", 00:14:04.211 "raid_level": "raid1", 00:14:04.211 "superblock": true, 00:14:04.211 "num_base_bdevs": 4, 00:14:04.211 "num_base_bdevs_discovered": 2, 00:14:04.211 "num_base_bdevs_operational": 2, 00:14:04.211 "base_bdevs_list": [ 00:14:04.211 { 00:14:04.211 "name": null, 00:14:04.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.211 "is_configured": false, 00:14:04.211 "data_offset": 0, 00:14:04.211 "data_size": 63488 00:14:04.211 }, 00:14:04.211 { 00:14:04.211 "name": null, 00:14:04.211 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.211 "is_configured": false, 00:14:04.211 "data_offset": 2048, 00:14:04.211 "data_size": 63488 00:14:04.211 }, 00:14:04.211 { 00:14:04.211 "name": "BaseBdev3", 00:14:04.211 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:14:04.211 "is_configured": true, 00:14:04.211 "data_offset": 2048, 00:14:04.211 "data_size": 63488 00:14:04.211 }, 00:14:04.211 { 00:14:04.211 "name": "BaseBdev4", 00:14:04.211 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:14:04.211 "is_configured": true, 00:14:04.211 "data_offset": 2048, 00:14:04.211 "data_size": 63488 00:14:04.211 } 00:14:04.211 ] 00:14:04.211 }' 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.211 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 
-- # local process_type=none 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.778 18:54:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.778 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.778 "name": "raid_bdev1", 00:14:04.778 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:14:04.778 "strip_size_kb": 0, 00:14:04.778 "state": "online", 00:14:04.778 "raid_level": "raid1", 00:14:04.778 "superblock": true, 00:14:04.778 "num_base_bdevs": 4, 00:14:04.778 "num_base_bdevs_discovered": 2, 00:14:04.778 "num_base_bdevs_operational": 2, 00:14:04.778 "base_bdevs_list": [ 00:14:04.778 { 00:14:04.778 "name": null, 00:14:04.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.778 "is_configured": false, 00:14:04.778 "data_offset": 0, 00:14:04.778 "data_size": 63488 00:14:04.779 }, 00:14:04.779 { 00:14:04.779 "name": null, 00:14:04.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.779 "is_configured": false, 00:14:04.779 "data_offset": 2048, 00:14:04.779 "data_size": 63488 00:14:04.779 }, 00:14:04.779 { 00:14:04.779 "name": "BaseBdev3", 00:14:04.779 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:14:04.779 "is_configured": true, 00:14:04.779 "data_offset": 2048, 00:14:04.779 "data_size": 63488 00:14:04.779 }, 00:14:04.779 { 00:14:04.779 
"name": "BaseBdev4", 00:14:04.779 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:14:04.779 "is_configured": true, 00:14:04.779 "data_offset": 2048, 00:14:04.779 "data_size": 63488 00:14:04.779 } 00:14:04.779 ] 00:14:04.779 }' 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.779 [2024-11-16 18:54:48.126264] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.779 [2024-11-16 18:54:48.126463] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:04.779 [2024-11-16 18:54:48.126483] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:04.779 request: 00:14:04.779 { 00:14:04.779 "base_bdev": "BaseBdev1", 00:14:04.779 "raid_bdev": "raid_bdev1", 00:14:04.779 "method": "bdev_raid_add_base_bdev", 00:14:04.779 "req_id": 1 00:14:04.779 } 00:14:04.779 Got JSON-RPC error response 00:14:04.779 response: 00:14:04.779 { 00:14:04.779 "code": -22, 00:14:04.779 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:04.779 } 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:04.779 18:54:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 
-- # local strip_size=0 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.719 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.720 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.720 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.982 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.982 "name": "raid_bdev1", 00:14:05.982 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:14:05.982 "strip_size_kb": 0, 00:14:05.982 "state": "online", 00:14:05.982 "raid_level": "raid1", 00:14:05.982 "superblock": true, 00:14:05.982 "num_base_bdevs": 4, 00:14:05.982 "num_base_bdevs_discovered": 2, 00:14:05.982 "num_base_bdevs_operational": 2, 00:14:05.982 "base_bdevs_list": [ 00:14:05.982 { 00:14:05.982 "name": null, 00:14:05.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.982 "is_configured": false, 00:14:05.982 "data_offset": 0, 00:14:05.982 "data_size": 63488 00:14:05.982 }, 00:14:05.982 { 00:14:05.982 "name": null, 00:14:05.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.982 "is_configured": false, 
00:14:05.982 "data_offset": 2048, 00:14:05.982 "data_size": 63488 00:14:05.982 }, 00:14:05.982 { 00:14:05.982 "name": "BaseBdev3", 00:14:05.982 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:14:05.982 "is_configured": true, 00:14:05.982 "data_offset": 2048, 00:14:05.982 "data_size": 63488 00:14:05.982 }, 00:14:05.982 { 00:14:05.982 "name": "BaseBdev4", 00:14:05.982 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:14:05.982 "is_configured": true, 00:14:05.982 "data_offset": 2048, 00:14:05.982 "data_size": 63488 00:14:05.982 } 00:14:05.982 ] 00:14:05.982 }' 00:14:05.983 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.983 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.240 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.240 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.240 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.240 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.240 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:14:06.241 "name": "raid_bdev1", 00:14:06.241 "uuid": "cfd9382a-1efb-487d-9c54-29784a525e0d", 00:14:06.241 "strip_size_kb": 0, 00:14:06.241 "state": "online", 00:14:06.241 "raid_level": "raid1", 00:14:06.241 "superblock": true, 00:14:06.241 "num_base_bdevs": 4, 00:14:06.241 "num_base_bdevs_discovered": 2, 00:14:06.241 "num_base_bdevs_operational": 2, 00:14:06.241 "base_bdevs_list": [ 00:14:06.241 { 00:14:06.241 "name": null, 00:14:06.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.241 "is_configured": false, 00:14:06.241 "data_offset": 0, 00:14:06.241 "data_size": 63488 00:14:06.241 }, 00:14:06.241 { 00:14:06.241 "name": null, 00:14:06.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.241 "is_configured": false, 00:14:06.241 "data_offset": 2048, 00:14:06.241 "data_size": 63488 00:14:06.241 }, 00:14:06.241 { 00:14:06.241 "name": "BaseBdev3", 00:14:06.241 "uuid": "3f40ee9b-1742-53df-86c6-67f8e6af5264", 00:14:06.241 "is_configured": true, 00:14:06.241 "data_offset": 2048, 00:14:06.241 "data_size": 63488 00:14:06.241 }, 00:14:06.241 { 00:14:06.241 "name": "BaseBdev4", 00:14:06.241 "uuid": "17dd7cf6-6550-506a-b136-74046dc2c629", 00:14:06.241 "is_configured": true, 00:14:06.241 "data_offset": 2048, 00:14:06.241 "data_size": 63488 00:14:06.241 } 00:14:06.241 ] 00:14:06.241 }' 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78874 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 
78874 ']' 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78874 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:06.241 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78874 00:14:06.499 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:06.499 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:06.499 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78874' 00:14:06.499 killing process with pid 78874 00:14:06.499 Received shutdown signal, test time was about 17.680313 seconds 00:14:06.499 00:14:06.499 Latency(us) 00:14:06.500 [2024-11-16T18:54:49.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.500 [2024-11-16T18:54:49.972Z] =================================================================================================================== 00:14:06.500 [2024-11-16T18:54:49.972Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:06.500 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78874 00:14:06.500 [2024-11-16 18:54:49.737070] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.500 [2024-11-16 18:54:49.737198] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.500 18:54:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78874 00:14:06.500 [2024-11-16 18:54:49.737265] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.500 [2024-11-16 18:54:49.737276] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:06.758 [2024-11-16 18:54:50.125762] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:08.139 18:54:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:08.139 00:14:08.139 real 0m21.031s 00:14:08.139 user 0m27.386s 00:14:08.139 sys 0m2.504s 00:14:08.139 18:54:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:08.139 ************************************ 00:14:08.139 END TEST raid_rebuild_test_sb_io 00:14:08.139 ************************************ 00:14:08.139 18:54:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 18:54:51 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:08.139 18:54:51 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:08.139 18:54:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:08.139 18:54:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:08.139 18:54:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 ************************************ 00:14:08.139 START TEST raid5f_state_function_test 00:14:08.139 ************************************ 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79596 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:08.139 Process raid pid: 79596 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79596' 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79596 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79596 ']' 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:08.139 18:54:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.139 [2024-11-16 18:54:51.377360] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:14:08.139 [2024-11-16 18:54:51.377576] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.140 [2024-11-16 18:54:51.552925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.399 [2024-11-16 18:54:51.655249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.400 [2024-11-16 18:54:51.844387] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.400 [2024-11-16 18:54:51.844499] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.968 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:08.968 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:08.968 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:08.968 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.969 [2024-11-16 18:54:52.208097] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.969 [2024-11-16 18:54:52.208152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.969 [2024-11-16 18:54:52.208163] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.969 [2024-11-16 18:54:52.208172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.969 [2024-11-16 18:54:52.208183] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:08.969 [2024-11-16 18:54:52.208192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.969 "name": "Existed_Raid", 00:14:08.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.969 "strip_size_kb": 64, 00:14:08.969 "state": "configuring", 00:14:08.969 "raid_level": "raid5f", 00:14:08.969 "superblock": false, 00:14:08.969 "num_base_bdevs": 3, 00:14:08.969 "num_base_bdevs_discovered": 0, 00:14:08.969 "num_base_bdevs_operational": 3, 00:14:08.969 "base_bdevs_list": [ 00:14:08.969 { 00:14:08.969 "name": "BaseBdev1", 00:14:08.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.969 "is_configured": false, 00:14:08.969 "data_offset": 0, 00:14:08.969 "data_size": 0 00:14:08.969 }, 00:14:08.969 { 00:14:08.969 "name": "BaseBdev2", 00:14:08.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.969 "is_configured": false, 00:14:08.969 "data_offset": 0, 00:14:08.969 "data_size": 0 00:14:08.969 }, 00:14:08.969 { 00:14:08.969 "name": "BaseBdev3", 00:14:08.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.969 "is_configured": false, 00:14:08.969 "data_offset": 0, 00:14:08.969 "data_size": 0 00:14:08.969 } 00:14:08.969 ] 00:14:08.969 }' 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.969 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.228 [2024-11-16 18:54:52.659277] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.228 [2024-11-16 18:54:52.659359] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.228 [2024-11-16 18:54:52.671259] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:09.228 [2024-11-16 18:54:52.671343] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:09.228 [2024-11-16 18:54:52.671373] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:09.228 [2024-11-16 18:54:52.671396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.228 [2024-11-16 18:54:52.671414] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.228 [2024-11-16 18:54:52.671434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.228 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.488 [2024-11-16 18:54:52.716802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.488 BaseBdev1 00:14:09.488 18:54:52 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.488 [ 00:14:09.488 { 00:14:09.488 "name": "BaseBdev1", 00:14:09.488 "aliases": [ 00:14:09.488 "b7a7f3cd-6a82-457c-8ec0-5a1509a31882" 00:14:09.488 ], 00:14:09.488 "product_name": "Malloc disk", 00:14:09.488 "block_size": 512, 00:14:09.488 "num_blocks": 65536, 00:14:09.488 "uuid": "b7a7f3cd-6a82-457c-8ec0-5a1509a31882", 00:14:09.488 "assigned_rate_limits": { 00:14:09.488 "rw_ios_per_sec": 0, 00:14:09.488 
"rw_mbytes_per_sec": 0, 00:14:09.488 "r_mbytes_per_sec": 0, 00:14:09.488 "w_mbytes_per_sec": 0 00:14:09.488 }, 00:14:09.488 "claimed": true, 00:14:09.488 "claim_type": "exclusive_write", 00:14:09.488 "zoned": false, 00:14:09.488 "supported_io_types": { 00:14:09.488 "read": true, 00:14:09.488 "write": true, 00:14:09.488 "unmap": true, 00:14:09.488 "flush": true, 00:14:09.488 "reset": true, 00:14:09.488 "nvme_admin": false, 00:14:09.488 "nvme_io": false, 00:14:09.488 "nvme_io_md": false, 00:14:09.488 "write_zeroes": true, 00:14:09.488 "zcopy": true, 00:14:09.488 "get_zone_info": false, 00:14:09.488 "zone_management": false, 00:14:09.488 "zone_append": false, 00:14:09.488 "compare": false, 00:14:09.488 "compare_and_write": false, 00:14:09.488 "abort": true, 00:14:09.488 "seek_hole": false, 00:14:09.488 "seek_data": false, 00:14:09.488 "copy": true, 00:14:09.488 "nvme_iov_md": false 00:14:09.488 }, 00:14:09.488 "memory_domains": [ 00:14:09.488 { 00:14:09.488 "dma_device_id": "system", 00:14:09.488 "dma_device_type": 1 00:14:09.488 }, 00:14:09.488 { 00:14:09.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.488 "dma_device_type": 2 00:14:09.488 } 00:14:09.488 ], 00:14:09.488 "driver_specific": {} 00:14:09.488 } 00:14:09.488 ] 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.488 18:54:52 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.488 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.488 "name": "Existed_Raid", 00:14:09.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.488 "strip_size_kb": 64, 00:14:09.488 "state": "configuring", 00:14:09.488 "raid_level": "raid5f", 00:14:09.488 "superblock": false, 00:14:09.488 "num_base_bdevs": 3, 00:14:09.488 "num_base_bdevs_discovered": 1, 00:14:09.488 "num_base_bdevs_operational": 3, 00:14:09.488 "base_bdevs_list": [ 00:14:09.488 { 00:14:09.488 "name": "BaseBdev1", 00:14:09.489 "uuid": "b7a7f3cd-6a82-457c-8ec0-5a1509a31882", 00:14:09.489 "is_configured": true, 00:14:09.489 "data_offset": 0, 00:14:09.489 "data_size": 65536 00:14:09.489 }, 00:14:09.489 { 00:14:09.489 "name": 
"BaseBdev2", 00:14:09.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.489 "is_configured": false, 00:14:09.489 "data_offset": 0, 00:14:09.489 "data_size": 0 00:14:09.489 }, 00:14:09.489 { 00:14:09.489 "name": "BaseBdev3", 00:14:09.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.489 "is_configured": false, 00:14:09.489 "data_offset": 0, 00:14:09.489 "data_size": 0 00:14:09.489 } 00:14:09.489 ] 00:14:09.489 }' 00:14:09.489 18:54:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.489 18:54:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.748 [2024-11-16 18:54:53.148121] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.748 [2024-11-16 18:54:53.148176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.748 [2024-11-16 18:54:53.156149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.748 [2024-11-16 18:54:53.157958] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:09.748 [2024-11-16 18:54:53.158027] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:09.748 [2024-11-16 18:54:53.158060] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:09.748 [2024-11-16 18:54:53.158090] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:09.748 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.749 "name": "Existed_Raid", 00:14:09.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.749 "strip_size_kb": 64, 00:14:09.749 "state": "configuring", 00:14:09.749 "raid_level": "raid5f", 00:14:09.749 "superblock": false, 00:14:09.749 "num_base_bdevs": 3, 00:14:09.749 "num_base_bdevs_discovered": 1, 00:14:09.749 "num_base_bdevs_operational": 3, 00:14:09.749 "base_bdevs_list": [ 00:14:09.749 { 00:14:09.749 "name": "BaseBdev1", 00:14:09.749 "uuid": "b7a7f3cd-6a82-457c-8ec0-5a1509a31882", 00:14:09.749 "is_configured": true, 00:14:09.749 "data_offset": 0, 00:14:09.749 "data_size": 65536 00:14:09.749 }, 00:14:09.749 { 00:14:09.749 "name": "BaseBdev2", 00:14:09.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.749 "is_configured": false, 00:14:09.749 "data_offset": 0, 00:14:09.749 "data_size": 0 00:14:09.749 }, 00:14:09.749 { 00:14:09.749 "name": "BaseBdev3", 00:14:09.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.749 "is_configured": false, 00:14:09.749 "data_offset": 0, 00:14:09.749 "data_size": 0 00:14:09.749 } 00:14:09.749 ] 00:14:09.749 }' 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.749 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.320 [2024-11-16 18:54:53.608461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:10.320 BaseBdev2 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:10.320 [ 00:14:10.320 { 00:14:10.320 "name": "BaseBdev2", 00:14:10.320 "aliases": [ 00:14:10.320 "f6560fb6-905e-407c-aca7-19e23281ebab" 00:14:10.320 ], 00:14:10.320 "product_name": "Malloc disk", 00:14:10.320 "block_size": 512, 00:14:10.320 "num_blocks": 65536, 00:14:10.320 "uuid": "f6560fb6-905e-407c-aca7-19e23281ebab", 00:14:10.320 "assigned_rate_limits": { 00:14:10.320 "rw_ios_per_sec": 0, 00:14:10.320 "rw_mbytes_per_sec": 0, 00:14:10.320 "r_mbytes_per_sec": 0, 00:14:10.320 "w_mbytes_per_sec": 0 00:14:10.320 }, 00:14:10.320 "claimed": true, 00:14:10.320 "claim_type": "exclusive_write", 00:14:10.320 "zoned": false, 00:14:10.320 "supported_io_types": { 00:14:10.320 "read": true, 00:14:10.320 "write": true, 00:14:10.320 "unmap": true, 00:14:10.320 "flush": true, 00:14:10.320 "reset": true, 00:14:10.320 "nvme_admin": false, 00:14:10.320 "nvme_io": false, 00:14:10.320 "nvme_io_md": false, 00:14:10.320 "write_zeroes": true, 00:14:10.320 "zcopy": true, 00:14:10.320 "get_zone_info": false, 00:14:10.320 "zone_management": false, 00:14:10.320 "zone_append": false, 00:14:10.320 "compare": false, 00:14:10.320 "compare_and_write": false, 00:14:10.320 "abort": true, 00:14:10.320 "seek_hole": false, 00:14:10.320 "seek_data": false, 00:14:10.320 "copy": true, 00:14:10.320 "nvme_iov_md": false 00:14:10.320 }, 00:14:10.320 "memory_domains": [ 00:14:10.320 { 00:14:10.320 "dma_device_id": "system", 00:14:10.320 "dma_device_type": 1 00:14:10.320 }, 00:14:10.320 { 00:14:10.320 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.320 "dma_device_type": 2 00:14:10.320 } 00:14:10.320 ], 00:14:10.320 "driver_specific": {} 00:14:10.320 } 00:14:10.320 ] 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.320 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:10.320 "name": "Existed_Raid", 00:14:10.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.320 "strip_size_kb": 64, 00:14:10.321 "state": "configuring", 00:14:10.321 "raid_level": "raid5f", 00:14:10.321 "superblock": false, 00:14:10.321 "num_base_bdevs": 3, 00:14:10.321 "num_base_bdevs_discovered": 2, 00:14:10.321 "num_base_bdevs_operational": 3, 00:14:10.321 "base_bdevs_list": [ 00:14:10.321 { 00:14:10.321 "name": "BaseBdev1", 00:14:10.321 "uuid": "b7a7f3cd-6a82-457c-8ec0-5a1509a31882", 00:14:10.321 "is_configured": true, 00:14:10.321 "data_offset": 0, 00:14:10.321 "data_size": 65536 00:14:10.321 }, 00:14:10.321 { 00:14:10.321 "name": "BaseBdev2", 00:14:10.321 "uuid": "f6560fb6-905e-407c-aca7-19e23281ebab", 00:14:10.321 "is_configured": true, 00:14:10.321 "data_offset": 0, 00:14:10.321 "data_size": 65536 00:14:10.321 }, 00:14:10.321 { 00:14:10.321 "name": "BaseBdev3", 00:14:10.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.321 "is_configured": false, 00:14:10.321 "data_offset": 0, 00:14:10.321 "data_size": 0 00:14:10.321 } 00:14:10.321 ] 00:14:10.321 }' 00:14:10.321 18:54:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.321 18:54:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.889 [2024-11-16 18:54:54.150800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:10.889 [2024-11-16 18:54:54.150937] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:10.889 [2024-11-16 18:54:54.150967] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:10.889 [2024-11-16 18:54:54.151276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:10.889 [2024-11-16 18:54:54.156497] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:10.889 [2024-11-16 18:54:54.156555] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:10.889 [2024-11-16 18:54:54.156889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:10.889 BaseBdev3 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.889 [ 00:14:10.889 { 00:14:10.889 "name": "BaseBdev3", 00:14:10.889 "aliases": [ 00:14:10.889 "137babd8-946b-45c2-a922-9ed0b8059ad5" 00:14:10.889 ], 00:14:10.889 "product_name": "Malloc disk", 00:14:10.889 "block_size": 512, 00:14:10.889 "num_blocks": 65536, 00:14:10.889 "uuid": "137babd8-946b-45c2-a922-9ed0b8059ad5", 00:14:10.889 "assigned_rate_limits": { 00:14:10.889 "rw_ios_per_sec": 0, 00:14:10.889 "rw_mbytes_per_sec": 0, 00:14:10.889 "r_mbytes_per_sec": 0, 00:14:10.889 "w_mbytes_per_sec": 0 00:14:10.889 }, 00:14:10.889 "claimed": true, 00:14:10.889 "claim_type": "exclusive_write", 00:14:10.889 "zoned": false, 00:14:10.889 "supported_io_types": { 00:14:10.889 "read": true, 00:14:10.889 "write": true, 00:14:10.889 "unmap": true, 00:14:10.889 "flush": true, 00:14:10.889 "reset": true, 00:14:10.889 "nvme_admin": false, 00:14:10.889 "nvme_io": false, 00:14:10.889 "nvme_io_md": false, 00:14:10.889 "write_zeroes": true, 00:14:10.889 "zcopy": true, 00:14:10.889 "get_zone_info": false, 00:14:10.889 "zone_management": false, 00:14:10.889 "zone_append": false, 00:14:10.889 "compare": false, 00:14:10.889 "compare_and_write": false, 00:14:10.889 "abort": true, 00:14:10.889 "seek_hole": false, 00:14:10.889 "seek_data": false, 00:14:10.889 "copy": true, 00:14:10.889 "nvme_iov_md": false 00:14:10.889 }, 00:14:10.889 "memory_domains": [ 00:14:10.889 { 00:14:10.889 "dma_device_id": "system", 00:14:10.889 "dma_device_type": 1 00:14:10.889 }, 00:14:10.889 { 00:14:10.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.889 "dma_device_type": 2 00:14:10.889 } 00:14:10.889 ], 00:14:10.889 "driver_specific": {} 00:14:10.889 } 00:14:10.889 ] 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.889 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.890 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.890 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.890 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.890 18:54:54 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.890 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.890 "name": "Existed_Raid", 00:14:10.890 "uuid": "01d84209-6cab-4546-a9b1-213d3ff47801", 00:14:10.890 "strip_size_kb": 64, 00:14:10.890 "state": "online", 00:14:10.890 "raid_level": "raid5f", 00:14:10.890 "superblock": false, 00:14:10.890 "num_base_bdevs": 3, 00:14:10.890 "num_base_bdevs_discovered": 3, 00:14:10.890 "num_base_bdevs_operational": 3, 00:14:10.890 "base_bdevs_list": [ 00:14:10.890 { 00:14:10.890 "name": "BaseBdev1", 00:14:10.890 "uuid": "b7a7f3cd-6a82-457c-8ec0-5a1509a31882", 00:14:10.890 "is_configured": true, 00:14:10.890 "data_offset": 0, 00:14:10.890 "data_size": 65536 00:14:10.890 }, 00:14:10.890 { 00:14:10.890 "name": "BaseBdev2", 00:14:10.890 "uuid": "f6560fb6-905e-407c-aca7-19e23281ebab", 00:14:10.890 "is_configured": true, 00:14:10.890 "data_offset": 0, 00:14:10.890 "data_size": 65536 00:14:10.890 }, 00:14:10.890 { 00:14:10.890 "name": "BaseBdev3", 00:14:10.890 "uuid": "137babd8-946b-45c2-a922-9ed0b8059ad5", 00:14:10.890 "is_configured": true, 00:14:10.890 "data_offset": 0, 00:14:10.890 "data_size": 65536 00:14:10.890 } 00:14:10.890 ] 00:14:10.890 }' 00:14:10.890 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.890 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:11.151 18:54:54 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:11.151 [2024-11-16 18:54:54.602578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:11.151 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:11.410 "name": "Existed_Raid", 00:14:11.410 "aliases": [ 00:14:11.410 "01d84209-6cab-4546-a9b1-213d3ff47801" 00:14:11.410 ], 00:14:11.410 "product_name": "Raid Volume", 00:14:11.410 "block_size": 512, 00:14:11.410 "num_blocks": 131072, 00:14:11.410 "uuid": "01d84209-6cab-4546-a9b1-213d3ff47801", 00:14:11.410 "assigned_rate_limits": { 00:14:11.410 "rw_ios_per_sec": 0, 00:14:11.410 "rw_mbytes_per_sec": 0, 00:14:11.410 "r_mbytes_per_sec": 0, 00:14:11.410 "w_mbytes_per_sec": 0 00:14:11.410 }, 00:14:11.410 "claimed": false, 00:14:11.410 "zoned": false, 00:14:11.410 "supported_io_types": { 00:14:11.410 "read": true, 00:14:11.410 "write": true, 00:14:11.410 "unmap": false, 00:14:11.410 "flush": false, 00:14:11.410 "reset": true, 00:14:11.410 "nvme_admin": false, 00:14:11.410 "nvme_io": false, 00:14:11.410 "nvme_io_md": false, 00:14:11.410 "write_zeroes": true, 00:14:11.410 "zcopy": false, 00:14:11.410 "get_zone_info": false, 00:14:11.410 "zone_management": false, 00:14:11.410 "zone_append": false, 
00:14:11.410 "compare": false, 00:14:11.410 "compare_and_write": false, 00:14:11.410 "abort": false, 00:14:11.410 "seek_hole": false, 00:14:11.410 "seek_data": false, 00:14:11.410 "copy": false, 00:14:11.410 "nvme_iov_md": false 00:14:11.410 }, 00:14:11.410 "driver_specific": { 00:14:11.410 "raid": { 00:14:11.410 "uuid": "01d84209-6cab-4546-a9b1-213d3ff47801", 00:14:11.410 "strip_size_kb": 64, 00:14:11.410 "state": "online", 00:14:11.410 "raid_level": "raid5f", 00:14:11.410 "superblock": false, 00:14:11.410 "num_base_bdevs": 3, 00:14:11.410 "num_base_bdevs_discovered": 3, 00:14:11.410 "num_base_bdevs_operational": 3, 00:14:11.410 "base_bdevs_list": [ 00:14:11.410 { 00:14:11.410 "name": "BaseBdev1", 00:14:11.410 "uuid": "b7a7f3cd-6a82-457c-8ec0-5a1509a31882", 00:14:11.410 "is_configured": true, 00:14:11.410 "data_offset": 0, 00:14:11.410 "data_size": 65536 00:14:11.410 }, 00:14:11.410 { 00:14:11.410 "name": "BaseBdev2", 00:14:11.410 "uuid": "f6560fb6-905e-407c-aca7-19e23281ebab", 00:14:11.410 "is_configured": true, 00:14:11.410 "data_offset": 0, 00:14:11.410 "data_size": 65536 00:14:11.410 }, 00:14:11.410 { 00:14:11.410 "name": "BaseBdev3", 00:14:11.410 "uuid": "137babd8-946b-45c2-a922-9ed0b8059ad5", 00:14:11.410 "is_configured": true, 00:14:11.410 "data_offset": 0, 00:14:11.410 "data_size": 65536 00:14:11.410 } 00:14:11.410 ] 00:14:11.410 } 00:14:11.410 } 00:14:11.410 }' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:11.410 BaseBdev2 00:14:11.410 BaseBdev3' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.410 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.670 [2024-11-16 18:54:54.905949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:11.670 
18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.670 18:54:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.670 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.670 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.670 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.670 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.670 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.670 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.670 "name": "Existed_Raid", 00:14:11.670 "uuid": "01d84209-6cab-4546-a9b1-213d3ff47801", 00:14:11.670 "strip_size_kb": 64, 00:14:11.670 "state": 
"online", 00:14:11.670 "raid_level": "raid5f", 00:14:11.670 "superblock": false, 00:14:11.670 "num_base_bdevs": 3, 00:14:11.670 "num_base_bdevs_discovered": 2, 00:14:11.670 "num_base_bdevs_operational": 2, 00:14:11.670 "base_bdevs_list": [ 00:14:11.670 { 00:14:11.670 "name": null, 00:14:11.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.670 "is_configured": false, 00:14:11.670 "data_offset": 0, 00:14:11.670 "data_size": 65536 00:14:11.670 }, 00:14:11.670 { 00:14:11.670 "name": "BaseBdev2", 00:14:11.670 "uuid": "f6560fb6-905e-407c-aca7-19e23281ebab", 00:14:11.670 "is_configured": true, 00:14:11.670 "data_offset": 0, 00:14:11.670 "data_size": 65536 00:14:11.670 }, 00:14:11.670 { 00:14:11.670 "name": "BaseBdev3", 00:14:11.670 "uuid": "137babd8-946b-45c2-a922-9ed0b8059ad5", 00:14:11.670 "is_configured": true, 00:14:11.670 "data_offset": 0, 00:14:11.670 "data_size": 65536 00:14:11.670 } 00:14:11.670 ] 00:14:11.670 }' 00:14:11.670 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.670 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.929 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:11.929 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:11.929 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.929 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:11.929 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.929 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.188 [2024-11-16 18:54:55.452138] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.188 [2024-11-16 18:54:55.452233] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:12.188 [2024-11-16 18:54:55.543771] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.188 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.188 [2024-11-16 18:54:55.603712] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:12.188 [2024-11-16 18:54:55.603793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.447 BaseBdev2 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.447 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:12.447 [ 00:14:12.447 { 00:14:12.447 "name": "BaseBdev2", 00:14:12.447 "aliases": [ 00:14:12.447 "a2e174b8-c898-4646-a00b-8da57fe33441" 00:14:12.447 ], 00:14:12.447 "product_name": "Malloc disk", 00:14:12.447 "block_size": 512, 00:14:12.447 "num_blocks": 65536, 00:14:12.447 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:12.447 "assigned_rate_limits": { 00:14:12.447 "rw_ios_per_sec": 0, 00:14:12.447 "rw_mbytes_per_sec": 0, 00:14:12.447 "r_mbytes_per_sec": 0, 00:14:12.447 "w_mbytes_per_sec": 0 00:14:12.447 }, 00:14:12.448 "claimed": false, 00:14:12.448 "zoned": false, 00:14:12.448 "supported_io_types": { 00:14:12.448 "read": true, 00:14:12.448 "write": true, 00:14:12.448 "unmap": true, 00:14:12.448 "flush": true, 00:14:12.448 "reset": true, 00:14:12.448 "nvme_admin": false, 00:14:12.448 "nvme_io": false, 00:14:12.448 "nvme_io_md": false, 00:14:12.448 "write_zeroes": true, 00:14:12.448 "zcopy": true, 00:14:12.448 "get_zone_info": false, 00:14:12.448 "zone_management": false, 00:14:12.448 "zone_append": false, 00:14:12.448 "compare": false, 00:14:12.448 "compare_and_write": false, 00:14:12.448 "abort": true, 00:14:12.448 "seek_hole": false, 00:14:12.448 "seek_data": false, 00:14:12.448 "copy": true, 00:14:12.448 "nvme_iov_md": false 00:14:12.448 }, 00:14:12.448 "memory_domains": [ 00:14:12.448 { 00:14:12.448 "dma_device_id": "system", 00:14:12.448 "dma_device_type": 1 00:14:12.448 }, 00:14:12.448 { 00:14:12.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.448 "dma_device_type": 2 00:14:12.448 } 00:14:12.448 ], 00:14:12.448 "driver_specific": {} 00:14:12.448 } 00:14:12.448 ] 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 BaseBdev3 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:12.448 [ 00:14:12.448 { 00:14:12.448 "name": "BaseBdev3", 00:14:12.448 "aliases": [ 00:14:12.448 "0dd4626c-26de-44c5-ab6e-acdf7b712850" 00:14:12.448 ], 00:14:12.448 "product_name": "Malloc disk", 00:14:12.448 "block_size": 512, 00:14:12.448 "num_blocks": 65536, 00:14:12.448 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:12.448 "assigned_rate_limits": { 00:14:12.448 "rw_ios_per_sec": 0, 00:14:12.448 "rw_mbytes_per_sec": 0, 00:14:12.448 "r_mbytes_per_sec": 0, 00:14:12.448 "w_mbytes_per_sec": 0 00:14:12.448 }, 00:14:12.448 "claimed": false, 00:14:12.448 "zoned": false, 00:14:12.448 "supported_io_types": { 00:14:12.448 "read": true, 00:14:12.448 "write": true, 00:14:12.448 "unmap": true, 00:14:12.448 "flush": true, 00:14:12.448 "reset": true, 00:14:12.448 "nvme_admin": false, 00:14:12.448 "nvme_io": false, 00:14:12.448 "nvme_io_md": false, 00:14:12.448 "write_zeroes": true, 00:14:12.448 "zcopy": true, 00:14:12.448 "get_zone_info": false, 00:14:12.448 "zone_management": false, 00:14:12.448 "zone_append": false, 00:14:12.448 "compare": false, 00:14:12.448 "compare_and_write": false, 00:14:12.448 "abort": true, 00:14:12.448 "seek_hole": false, 00:14:12.448 "seek_data": false, 00:14:12.448 "copy": true, 00:14:12.448 "nvme_iov_md": false 00:14:12.448 }, 00:14:12.448 "memory_domains": [ 00:14:12.448 { 00:14:12.448 "dma_device_id": "system", 00:14:12.448 "dma_device_type": 1 00:14:12.448 }, 00:14:12.448 { 00:14:12.448 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:12.448 "dma_device_type": 2 00:14:12.448 } 00:14:12.448 ], 00:14:12.448 "driver_specific": {} 00:14:12.448 } 00:14:12.448 ] 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:12.448 18:54:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 [2024-11-16 18:54:55.890615] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:12.448 [2024-11-16 18:54:55.890721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:12.448 [2024-11-16 18:54:55.890768] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.448 [2024-11-16 18:54:55.892513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.448 18:54:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.448 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.707 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.707 "name": "Existed_Raid", 00:14:12.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.707 "strip_size_kb": 64, 00:14:12.707 "state": "configuring", 00:14:12.707 "raid_level": "raid5f", 00:14:12.707 "superblock": false, 00:14:12.707 "num_base_bdevs": 3, 00:14:12.707 "num_base_bdevs_discovered": 2, 00:14:12.707 "num_base_bdevs_operational": 3, 00:14:12.707 "base_bdevs_list": [ 00:14:12.707 { 00:14:12.707 "name": "BaseBdev1", 00:14:12.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.707 "is_configured": false, 00:14:12.707 "data_offset": 0, 00:14:12.707 "data_size": 0 00:14:12.707 }, 00:14:12.707 { 00:14:12.707 "name": "BaseBdev2", 00:14:12.707 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:12.707 "is_configured": true, 00:14:12.707 "data_offset": 0, 00:14:12.707 "data_size": 65536 00:14:12.707 }, 00:14:12.707 { 00:14:12.707 "name": "BaseBdev3", 00:14:12.707 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:12.707 "is_configured": true, 
00:14:12.707 "data_offset": 0, 00:14:12.707 "data_size": 65536 00:14:12.707 } 00:14:12.707 ] 00:14:12.707 }' 00:14:12.707 18:54:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.707 18:54:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.966 [2024-11-16 18:54:56.341806] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.966 18:54:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.966 "name": "Existed_Raid", 00:14:12.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.966 "strip_size_kb": 64, 00:14:12.966 "state": "configuring", 00:14:12.966 "raid_level": "raid5f", 00:14:12.966 "superblock": false, 00:14:12.966 "num_base_bdevs": 3, 00:14:12.966 "num_base_bdevs_discovered": 1, 00:14:12.966 "num_base_bdevs_operational": 3, 00:14:12.966 "base_bdevs_list": [ 00:14:12.966 { 00:14:12.966 "name": "BaseBdev1", 00:14:12.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.966 "is_configured": false, 00:14:12.966 "data_offset": 0, 00:14:12.966 "data_size": 0 00:14:12.966 }, 00:14:12.966 { 00:14:12.966 "name": null, 00:14:12.966 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:12.966 "is_configured": false, 00:14:12.966 "data_offset": 0, 00:14:12.966 "data_size": 65536 00:14:12.966 }, 00:14:12.966 { 00:14:12.966 "name": "BaseBdev3", 00:14:12.966 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:12.966 "is_configured": true, 00:14:12.966 "data_offset": 0, 00:14:12.966 "data_size": 65536 00:14:12.966 } 00:14:12.966 ] 00:14:12.966 }' 00:14:12.966 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.966 18:54:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.534 [2024-11-16 18:54:56.828376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:13.534 BaseBdev1 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:13.534 18:54:56 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.534 [ 00:14:13.534 { 00:14:13.534 "name": "BaseBdev1", 00:14:13.534 "aliases": [ 00:14:13.534 "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1" 00:14:13.534 ], 00:14:13.534 "product_name": "Malloc disk", 00:14:13.534 "block_size": 512, 00:14:13.534 "num_blocks": 65536, 00:14:13.534 "uuid": "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1", 00:14:13.534 "assigned_rate_limits": { 00:14:13.534 "rw_ios_per_sec": 0, 00:14:13.534 "rw_mbytes_per_sec": 0, 00:14:13.534 "r_mbytes_per_sec": 0, 00:14:13.534 "w_mbytes_per_sec": 0 00:14:13.534 }, 00:14:13.534 "claimed": true, 00:14:13.534 "claim_type": "exclusive_write", 00:14:13.534 "zoned": false, 00:14:13.534 "supported_io_types": { 00:14:13.534 "read": true, 00:14:13.534 "write": true, 00:14:13.534 "unmap": true, 00:14:13.534 "flush": true, 00:14:13.534 "reset": true, 00:14:13.534 "nvme_admin": false, 00:14:13.534 "nvme_io": false, 00:14:13.534 "nvme_io_md": false, 00:14:13.534 "write_zeroes": true, 00:14:13.534 "zcopy": true, 00:14:13.534 "get_zone_info": false, 00:14:13.534 "zone_management": false, 00:14:13.534 "zone_append": false, 00:14:13.534 
"compare": false, 00:14:13.534 "compare_and_write": false, 00:14:13.534 "abort": true, 00:14:13.534 "seek_hole": false, 00:14:13.534 "seek_data": false, 00:14:13.534 "copy": true, 00:14:13.534 "nvme_iov_md": false 00:14:13.534 }, 00:14:13.534 "memory_domains": [ 00:14:13.534 { 00:14:13.534 "dma_device_id": "system", 00:14:13.534 "dma_device_type": 1 00:14:13.534 }, 00:14:13.534 { 00:14:13.534 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.534 "dma_device_type": 2 00:14:13.534 } 00:14:13.534 ], 00:14:13.534 "driver_specific": {} 00:14:13.534 } 00:14:13.534 ] 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.534 18:54:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.534 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.534 "name": "Existed_Raid", 00:14:13.534 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.534 "strip_size_kb": 64, 00:14:13.534 "state": "configuring", 00:14:13.534 "raid_level": "raid5f", 00:14:13.534 "superblock": false, 00:14:13.534 "num_base_bdevs": 3, 00:14:13.534 "num_base_bdevs_discovered": 2, 00:14:13.534 "num_base_bdevs_operational": 3, 00:14:13.534 "base_bdevs_list": [ 00:14:13.534 { 00:14:13.534 "name": "BaseBdev1", 00:14:13.534 "uuid": "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1", 00:14:13.534 "is_configured": true, 00:14:13.535 "data_offset": 0, 00:14:13.535 "data_size": 65536 00:14:13.535 }, 00:14:13.535 { 00:14:13.535 "name": null, 00:14:13.535 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:13.535 "is_configured": false, 00:14:13.535 "data_offset": 0, 00:14:13.535 "data_size": 65536 00:14:13.535 }, 00:14:13.535 { 00:14:13.535 "name": "BaseBdev3", 00:14:13.535 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:13.535 "is_configured": true, 00:14:13.535 "data_offset": 0, 00:14:13.535 "data_size": 65536 00:14:13.535 } 00:14:13.535 ] 00:14:13.535 }' 00:14:13.535 18:54:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.535 18:54:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.102 18:54:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.102 [2024-11-16 18:54:57.335558] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.102 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.103 18:54:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.103 "name": "Existed_Raid", 00:14:14.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.103 "strip_size_kb": 64, 00:14:14.103 "state": "configuring", 00:14:14.103 "raid_level": "raid5f", 00:14:14.103 "superblock": false, 00:14:14.103 "num_base_bdevs": 3, 00:14:14.103 "num_base_bdevs_discovered": 1, 00:14:14.103 "num_base_bdevs_operational": 3, 00:14:14.103 "base_bdevs_list": [ 00:14:14.103 { 00:14:14.103 "name": "BaseBdev1", 00:14:14.103 "uuid": "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1", 00:14:14.103 "is_configured": true, 00:14:14.103 "data_offset": 0, 00:14:14.103 "data_size": 65536 00:14:14.103 }, 00:14:14.103 { 00:14:14.103 "name": null, 00:14:14.103 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:14.103 "is_configured": false, 00:14:14.103 "data_offset": 0, 00:14:14.103 "data_size": 65536 00:14:14.103 }, 00:14:14.103 { 00:14:14.103 "name": null, 
00:14:14.103 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:14.103 "is_configured": false, 00:14:14.103 "data_offset": 0, 00:14:14.103 "data_size": 65536 00:14:14.103 } 00:14:14.103 ] 00:14:14.103 }' 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.103 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.361 [2024-11-16 18:54:57.806765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:14.361 18:54:57 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:14.361 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.362 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.362 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.362 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.362 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.362 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.362 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.362 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.362 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.620 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.620 "name": "Existed_Raid", 00:14:14.620 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.620 "strip_size_kb": 64, 00:14:14.620 "state": "configuring", 00:14:14.620 "raid_level": "raid5f", 00:14:14.620 "superblock": false, 00:14:14.620 "num_base_bdevs": 3, 00:14:14.620 "num_base_bdevs_discovered": 2, 00:14:14.620 "num_base_bdevs_operational": 3, 00:14:14.620 "base_bdevs_list": [ 00:14:14.620 { 
00:14:14.620 "name": "BaseBdev1", 00:14:14.620 "uuid": "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1", 00:14:14.620 "is_configured": true, 00:14:14.620 "data_offset": 0, 00:14:14.620 "data_size": 65536 00:14:14.620 }, 00:14:14.620 { 00:14:14.620 "name": null, 00:14:14.620 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:14.620 "is_configured": false, 00:14:14.620 "data_offset": 0, 00:14:14.620 "data_size": 65536 00:14:14.620 }, 00:14:14.620 { 00:14:14.620 "name": "BaseBdev3", 00:14:14.620 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:14.620 "is_configured": true, 00:14:14.620 "data_offset": 0, 00:14:14.620 "data_size": 65536 00:14:14.620 } 00:14:14.620 ] 00:14:14.620 }' 00:14:14.620 18:54:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.620 18:54:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.879 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:14.879 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.879 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.879 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.879 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.879 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:14.879 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:14.879 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.879 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.879 [2024-11-16 18:54:58.317899] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.138 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.139 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.139 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.139 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.139 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.139 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.139 18:54:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.139 "name": "Existed_Raid", 00:14:15.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.139 "strip_size_kb": 64, 00:14:15.139 "state": "configuring", 00:14:15.139 "raid_level": "raid5f", 00:14:15.139 "superblock": false, 00:14:15.139 "num_base_bdevs": 3, 00:14:15.139 "num_base_bdevs_discovered": 1, 00:14:15.139 "num_base_bdevs_operational": 3, 00:14:15.139 "base_bdevs_list": [ 00:14:15.139 { 00:14:15.139 "name": null, 00:14:15.139 "uuid": "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1", 00:14:15.139 "is_configured": false, 00:14:15.139 "data_offset": 0, 00:14:15.139 "data_size": 65536 00:14:15.139 }, 00:14:15.139 { 00:14:15.139 "name": null, 00:14:15.139 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:15.139 "is_configured": false, 00:14:15.139 "data_offset": 0, 00:14:15.139 "data_size": 65536 00:14:15.139 }, 00:14:15.139 { 00:14:15.139 "name": "BaseBdev3", 00:14:15.139 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:15.139 "is_configured": true, 00:14:15.139 "data_offset": 0, 00:14:15.139 "data_size": 65536 00:14:15.139 } 00:14:15.139 ] 00:14:15.139 }' 00:14:15.139 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.139 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.398 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.398 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:15.398 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.398 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.398 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.398 18:54:58 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.399 [2024-11-16 18:54:58.837007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.399 18:54:58 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.399 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.658 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.658 "name": "Existed_Raid", 00:14:15.658 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.658 "strip_size_kb": 64, 00:14:15.658 "state": "configuring", 00:14:15.658 "raid_level": "raid5f", 00:14:15.658 "superblock": false, 00:14:15.658 "num_base_bdevs": 3, 00:14:15.658 "num_base_bdevs_discovered": 2, 00:14:15.658 "num_base_bdevs_operational": 3, 00:14:15.658 "base_bdevs_list": [ 00:14:15.658 { 00:14:15.658 "name": null, 00:14:15.658 "uuid": "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1", 00:14:15.658 "is_configured": false, 00:14:15.658 "data_offset": 0, 00:14:15.658 "data_size": 65536 00:14:15.658 }, 00:14:15.658 { 00:14:15.658 "name": "BaseBdev2", 00:14:15.658 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:15.658 "is_configured": true, 00:14:15.658 "data_offset": 0, 00:14:15.658 "data_size": 65536 00:14:15.658 }, 00:14:15.658 { 00:14:15.658 "name": "BaseBdev3", 00:14:15.658 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:15.658 "is_configured": true, 00:14:15.658 "data_offset": 0, 00:14:15.658 "data_size": 65536 00:14:15.658 } 00:14:15.658 ] 00:14:15.658 }' 00:14:15.658 18:54:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.658 18:54:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.917 18:54:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6c4753f4-bf49-4e2f-a33e-4916eb43c5e1 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.917 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.918 [2024-11-16 18:54:59.378878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:15.918 [2024-11-16 18:54:59.378922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:15.918 [2024-11-16 18:54:59.378948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:15.918 [2024-11-16 18:54:59.379184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:14:15.918 [2024-11-16 18:54:59.384034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:15.918 [2024-11-16 18:54:59.384057] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:15.918 [2024-11-16 18:54:59.384308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.918 NewBaseBdev 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.918 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.180 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.180 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:16.180 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.180 18:54:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.180 [ 00:14:16.180 { 00:14:16.180 "name": "NewBaseBdev", 00:14:16.180 "aliases": [ 00:14:16.180 "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1" 00:14:16.180 ], 00:14:16.180 "product_name": "Malloc disk", 00:14:16.180 "block_size": 512, 00:14:16.180 "num_blocks": 65536, 00:14:16.180 "uuid": "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1", 00:14:16.180 "assigned_rate_limits": { 00:14:16.180 "rw_ios_per_sec": 0, 00:14:16.180 "rw_mbytes_per_sec": 0, 00:14:16.180 "r_mbytes_per_sec": 0, 00:14:16.180 "w_mbytes_per_sec": 0 00:14:16.180 }, 00:14:16.180 "claimed": true, 00:14:16.180 "claim_type": "exclusive_write", 00:14:16.180 "zoned": false, 00:14:16.180 "supported_io_types": { 00:14:16.180 "read": true, 00:14:16.180 "write": true, 00:14:16.180 "unmap": true, 00:14:16.180 "flush": true, 00:14:16.180 "reset": true, 00:14:16.180 "nvme_admin": false, 00:14:16.180 "nvme_io": false, 00:14:16.180 "nvme_io_md": false, 00:14:16.180 "write_zeroes": true, 00:14:16.180 "zcopy": true, 00:14:16.180 "get_zone_info": false, 00:14:16.180 "zone_management": false, 00:14:16.180 "zone_append": false, 00:14:16.180 "compare": false, 00:14:16.180 "compare_and_write": false, 00:14:16.180 "abort": true, 00:14:16.180 "seek_hole": false, 00:14:16.180 "seek_data": false, 00:14:16.180 "copy": true, 00:14:16.180 "nvme_iov_md": false 00:14:16.180 }, 00:14:16.180 "memory_domains": [ 00:14:16.180 { 00:14:16.180 "dma_device_id": "system", 00:14:16.180 "dma_device_type": 1 00:14:16.180 }, 00:14:16.180 { 00:14:16.180 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.181 "dma_device_type": 2 00:14:16.181 } 00:14:16.181 ], 00:14:16.181 "driver_specific": {} 00:14:16.181 } 00:14:16.181 ] 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:16.181 18:54:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.181 "name": "Existed_Raid", 00:14:16.181 "uuid": "75ade7b2-70dd-4bb3-95ea-5ac6ce73cedc", 00:14:16.181 "strip_size_kb": 64, 00:14:16.181 "state": "online", 
00:14:16.181 "raid_level": "raid5f", 00:14:16.181 "superblock": false, 00:14:16.181 "num_base_bdevs": 3, 00:14:16.181 "num_base_bdevs_discovered": 3, 00:14:16.181 "num_base_bdevs_operational": 3, 00:14:16.181 "base_bdevs_list": [ 00:14:16.181 { 00:14:16.181 "name": "NewBaseBdev", 00:14:16.181 "uuid": "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1", 00:14:16.181 "is_configured": true, 00:14:16.181 "data_offset": 0, 00:14:16.181 "data_size": 65536 00:14:16.181 }, 00:14:16.181 { 00:14:16.181 "name": "BaseBdev2", 00:14:16.181 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:16.181 "is_configured": true, 00:14:16.181 "data_offset": 0, 00:14:16.181 "data_size": 65536 00:14:16.181 }, 00:14:16.181 { 00:14:16.181 "name": "BaseBdev3", 00:14:16.181 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:16.181 "is_configured": true, 00:14:16.181 "data_offset": 0, 00:14:16.181 "data_size": 65536 00:14:16.181 } 00:14:16.181 ] 00:14:16.181 }' 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.181 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:16.443 18:54:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.443 [2024-11-16 18:54:59.857770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:16.443 "name": "Existed_Raid", 00:14:16.443 "aliases": [ 00:14:16.443 "75ade7b2-70dd-4bb3-95ea-5ac6ce73cedc" 00:14:16.443 ], 00:14:16.443 "product_name": "Raid Volume", 00:14:16.443 "block_size": 512, 00:14:16.443 "num_blocks": 131072, 00:14:16.443 "uuid": "75ade7b2-70dd-4bb3-95ea-5ac6ce73cedc", 00:14:16.443 "assigned_rate_limits": { 00:14:16.443 "rw_ios_per_sec": 0, 00:14:16.443 "rw_mbytes_per_sec": 0, 00:14:16.443 "r_mbytes_per_sec": 0, 00:14:16.443 "w_mbytes_per_sec": 0 00:14:16.443 }, 00:14:16.443 "claimed": false, 00:14:16.443 "zoned": false, 00:14:16.443 "supported_io_types": { 00:14:16.443 "read": true, 00:14:16.443 "write": true, 00:14:16.443 "unmap": false, 00:14:16.443 "flush": false, 00:14:16.443 "reset": true, 00:14:16.443 "nvme_admin": false, 00:14:16.443 "nvme_io": false, 00:14:16.443 "nvme_io_md": false, 00:14:16.443 "write_zeroes": true, 00:14:16.443 "zcopy": false, 00:14:16.443 "get_zone_info": false, 00:14:16.443 "zone_management": false, 00:14:16.443 "zone_append": false, 00:14:16.443 "compare": false, 00:14:16.443 "compare_and_write": false, 00:14:16.443 "abort": false, 00:14:16.443 "seek_hole": false, 00:14:16.443 "seek_data": false, 00:14:16.443 "copy": false, 00:14:16.443 "nvme_iov_md": false 00:14:16.443 }, 00:14:16.443 "driver_specific": { 00:14:16.443 "raid": { 00:14:16.443 "uuid": 
"75ade7b2-70dd-4bb3-95ea-5ac6ce73cedc", 00:14:16.443 "strip_size_kb": 64, 00:14:16.443 "state": "online", 00:14:16.443 "raid_level": "raid5f", 00:14:16.443 "superblock": false, 00:14:16.443 "num_base_bdevs": 3, 00:14:16.443 "num_base_bdevs_discovered": 3, 00:14:16.443 "num_base_bdevs_operational": 3, 00:14:16.443 "base_bdevs_list": [ 00:14:16.443 { 00:14:16.443 "name": "NewBaseBdev", 00:14:16.443 "uuid": "6c4753f4-bf49-4e2f-a33e-4916eb43c5e1", 00:14:16.443 "is_configured": true, 00:14:16.443 "data_offset": 0, 00:14:16.443 "data_size": 65536 00:14:16.443 }, 00:14:16.443 { 00:14:16.443 "name": "BaseBdev2", 00:14:16.443 "uuid": "a2e174b8-c898-4646-a00b-8da57fe33441", 00:14:16.443 "is_configured": true, 00:14:16.443 "data_offset": 0, 00:14:16.443 "data_size": 65536 00:14:16.443 }, 00:14:16.443 { 00:14:16.443 "name": "BaseBdev3", 00:14:16.443 "uuid": "0dd4626c-26de-44c5-ab6e-acdf7b712850", 00:14:16.443 "is_configured": true, 00:14:16.443 "data_offset": 0, 00:14:16.443 "data_size": 65536 00:14:16.443 } 00:14:16.443 ] 00:14:16.443 } 00:14:16.443 } 00:14:16.443 }' 00:14:16.443 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.702 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:16.702 BaseBdev2 00:14:16.702 BaseBdev3' 00:14:16.702 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.702 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:16.702 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.703 18:54:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.703 18:54:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:16.703 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.703 18:54:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.703 [2024-11-16 18:55:00.109154] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:16.703 [2024-11-16 18:55:00.109182] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.703 [2024-11-16 18:55:00.109241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.703 [2024-11-16 18:55:00.109500] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.703 [2024-11-16 18:55:00.109521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79596 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79596 ']' 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79596 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79596 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:16.703 killing process with pid 79596 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79596' 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79596 00:14:16.703 [2024-11-16 18:55:00.158013] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.703 18:55:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79596 00:14:17.272 [2024-11-16 18:55:00.444333] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:14:18.215 00:14:18.215 real 0m10.189s 00:14:18.215 user 0m16.233s 00:14:18.215 sys 0m1.809s 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.215 ************************************ 00:14:18.215 END TEST raid5f_state_function_test 00:14:18.215 ************************************ 00:14:18.215 18:55:01 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:14:18.215 18:55:01 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:18.215 18:55:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.215 18:55:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.215 ************************************ 00:14:18.215 START TEST raid5f_state_function_test_sb 00:14:18.215 ************************************ 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:18.215 18:55:01 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80216 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:18.215 Process raid pid: 80216 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80216' 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80216 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80216 ']' 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.215 18:55:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.215 [2024-11-16 18:55:01.635477] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:14:18.215 [2024-11-16 18:55:01.635584] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.475 [2024-11-16 18:55:01.807424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.475 [2024-11-16 18:55:01.905551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.738 [2024-11-16 18:55:02.101912] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.738 [2024-11-16 18:55:02.101949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:18.998 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.998 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:18.998 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:18.998 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.998 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:18.998 [2024-11-16 18:55:02.441453] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:18.998 [2024-11-16 18:55:02.441498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:18.998 [2024-11-16 18:55:02.441507] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:18.998 [2024-11-16 18:55:02.441516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:18.998 [2024-11-16 18:55:02.441523] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:14:18.998 [2024-11-16 18:55:02.441531] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:18.998 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.998 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:18.998 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:18.998 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.999 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.258 18:55:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.258 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.258 "name": "Existed_Raid", 00:14:19.258 "uuid": "068f00f9-4708-4b0e-928b-95d9ac49dd9b", 00:14:19.258 "strip_size_kb": 64, 00:14:19.258 "state": "configuring", 00:14:19.258 "raid_level": "raid5f", 00:14:19.258 "superblock": true, 00:14:19.258 "num_base_bdevs": 3, 00:14:19.258 "num_base_bdevs_discovered": 0, 00:14:19.258 "num_base_bdevs_operational": 3, 00:14:19.258 "base_bdevs_list": [ 00:14:19.258 { 00:14:19.258 "name": "BaseBdev1", 00:14:19.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.258 "is_configured": false, 00:14:19.258 "data_offset": 0, 00:14:19.258 "data_size": 0 00:14:19.258 }, 00:14:19.258 { 00:14:19.258 "name": "BaseBdev2", 00:14:19.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.258 "is_configured": false, 00:14:19.258 "data_offset": 0, 00:14:19.258 "data_size": 0 00:14:19.258 }, 00:14:19.258 { 00:14:19.258 "name": "BaseBdev3", 00:14:19.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.258 "is_configured": false, 00:14:19.258 "data_offset": 0, 00:14:19.258 "data_size": 0 00:14:19.258 } 00:14:19.258 ] 00:14:19.258 }' 00:14:19.258 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.258 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.519 [2024-11-16 18:55:02.876666] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:19.519 
[2024-11-16 18:55:02.876702] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.519 [2024-11-16 18:55:02.888629] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:19.519 [2024-11-16 18:55:02.888680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:19.519 [2024-11-16 18:55:02.888689] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:19.519 [2024-11-16 18:55:02.888697] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:19.519 [2024-11-16 18:55:02.888703] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:19.519 [2024-11-16 18:55:02.888711] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.519 [2024-11-16 18:55:02.934229] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.519 BaseBdev1 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.519 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.520 [ 00:14:19.520 { 00:14:19.520 "name": "BaseBdev1", 00:14:19.520 "aliases": [ 00:14:19.520 "9058fdf1-e7f6-4a03-9732-7393063ac74d" 00:14:19.520 ], 00:14:19.520 "product_name": "Malloc disk", 00:14:19.520 "block_size": 512, 00:14:19.520 
"num_blocks": 65536, 00:14:19.520 "uuid": "9058fdf1-e7f6-4a03-9732-7393063ac74d", 00:14:19.520 "assigned_rate_limits": { 00:14:19.520 "rw_ios_per_sec": 0, 00:14:19.520 "rw_mbytes_per_sec": 0, 00:14:19.520 "r_mbytes_per_sec": 0, 00:14:19.520 "w_mbytes_per_sec": 0 00:14:19.520 }, 00:14:19.520 "claimed": true, 00:14:19.520 "claim_type": "exclusive_write", 00:14:19.520 "zoned": false, 00:14:19.520 "supported_io_types": { 00:14:19.520 "read": true, 00:14:19.520 "write": true, 00:14:19.520 "unmap": true, 00:14:19.520 "flush": true, 00:14:19.520 "reset": true, 00:14:19.520 "nvme_admin": false, 00:14:19.520 "nvme_io": false, 00:14:19.520 "nvme_io_md": false, 00:14:19.520 "write_zeroes": true, 00:14:19.520 "zcopy": true, 00:14:19.520 "get_zone_info": false, 00:14:19.520 "zone_management": false, 00:14:19.520 "zone_append": false, 00:14:19.520 "compare": false, 00:14:19.520 "compare_and_write": false, 00:14:19.520 "abort": true, 00:14:19.520 "seek_hole": false, 00:14:19.520 "seek_data": false, 00:14:19.520 "copy": true, 00:14:19.520 "nvme_iov_md": false 00:14:19.520 }, 00:14:19.520 "memory_domains": [ 00:14:19.520 { 00:14:19.520 "dma_device_id": "system", 00:14:19.520 "dma_device_type": 1 00:14:19.520 }, 00:14:19.520 { 00:14:19.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.520 "dma_device_type": 2 00:14:19.520 } 00:14:19.520 ], 00:14:19.520 "driver_specific": {} 00:14:19.520 } 00:14:19.520 ] 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.520 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.781 18:55:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.781 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.781 "name": "Existed_Raid", 00:14:19.781 "uuid": "3e65fca6-df82-45d0-b8b6-f6012576f054", 00:14:19.781 "strip_size_kb": 64, 00:14:19.781 "state": "configuring", 00:14:19.781 "raid_level": "raid5f", 00:14:19.781 "superblock": true, 00:14:19.781 "num_base_bdevs": 3, 00:14:19.781 "num_base_bdevs_discovered": 1, 00:14:19.781 "num_base_bdevs_operational": 3, 00:14:19.781 "base_bdevs_list": [ 00:14:19.781 { 00:14:19.781 
"name": "BaseBdev1", 00:14:19.781 "uuid": "9058fdf1-e7f6-4a03-9732-7393063ac74d", 00:14:19.781 "is_configured": true, 00:14:19.781 "data_offset": 2048, 00:14:19.781 "data_size": 63488 00:14:19.781 }, 00:14:19.781 { 00:14:19.781 "name": "BaseBdev2", 00:14:19.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.781 "is_configured": false, 00:14:19.781 "data_offset": 0, 00:14:19.781 "data_size": 0 00:14:19.781 }, 00:14:19.781 { 00:14:19.781 "name": "BaseBdev3", 00:14:19.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.781 "is_configured": false, 00:14:19.781 "data_offset": 0, 00:14:19.781 "data_size": 0 00:14:19.781 } 00:14:19.781 ] 00:14:19.781 }' 00:14:19.781 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.781 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.041 [2024-11-16 18:55:03.401442] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:20.041 [2024-11-16 18:55:03.401540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:14:20.041 [2024-11-16 18:55:03.413475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:20.041 [2024-11-16 18:55:03.415216] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:20.041 [2024-11-16 18:55:03.415269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:20.041 [2024-11-16 18:55:03.415278] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:20.041 [2024-11-16 18:55:03.415287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.041 "name": "Existed_Raid", 00:14:20.041 "uuid": "28e6c4d0-ebe6-458f-a1c2-5f6fbe5e53ec", 00:14:20.041 "strip_size_kb": 64, 00:14:20.041 "state": "configuring", 00:14:20.041 "raid_level": "raid5f", 00:14:20.041 "superblock": true, 00:14:20.041 "num_base_bdevs": 3, 00:14:20.041 "num_base_bdevs_discovered": 1, 00:14:20.041 "num_base_bdevs_operational": 3, 00:14:20.041 "base_bdevs_list": [ 00:14:20.041 { 00:14:20.041 "name": "BaseBdev1", 00:14:20.041 "uuid": "9058fdf1-e7f6-4a03-9732-7393063ac74d", 00:14:20.041 "is_configured": true, 00:14:20.041 "data_offset": 2048, 00:14:20.041 "data_size": 63488 00:14:20.041 }, 00:14:20.041 { 00:14:20.041 "name": "BaseBdev2", 00:14:20.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.041 "is_configured": false, 00:14:20.041 "data_offset": 0, 00:14:20.041 "data_size": 0 00:14:20.041 }, 00:14:20.041 { 00:14:20.041 "name": "BaseBdev3", 00:14:20.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.041 "is_configured": false, 00:14:20.041 "data_offset": 0, 00:14:20.041 "data_size": 
0 00:14:20.041 } 00:14:20.041 ] 00:14:20.041 }' 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.041 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.611 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.611 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.611 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.611 [2024-11-16 18:55:03.847966] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:20.611 BaseBdev2 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.612 [ 00:14:20.612 { 00:14:20.612 "name": "BaseBdev2", 00:14:20.612 "aliases": [ 00:14:20.612 "c560eb16-830c-46d6-8817-0f0a7052080a" 00:14:20.612 ], 00:14:20.612 "product_name": "Malloc disk", 00:14:20.612 "block_size": 512, 00:14:20.612 "num_blocks": 65536, 00:14:20.612 "uuid": "c560eb16-830c-46d6-8817-0f0a7052080a", 00:14:20.612 "assigned_rate_limits": { 00:14:20.612 "rw_ios_per_sec": 0, 00:14:20.612 "rw_mbytes_per_sec": 0, 00:14:20.612 "r_mbytes_per_sec": 0, 00:14:20.612 "w_mbytes_per_sec": 0 00:14:20.612 }, 00:14:20.612 "claimed": true, 00:14:20.612 "claim_type": "exclusive_write", 00:14:20.612 "zoned": false, 00:14:20.612 "supported_io_types": { 00:14:20.612 "read": true, 00:14:20.612 "write": true, 00:14:20.612 "unmap": true, 00:14:20.612 "flush": true, 00:14:20.612 "reset": true, 00:14:20.612 "nvme_admin": false, 00:14:20.612 "nvme_io": false, 00:14:20.612 "nvme_io_md": false, 00:14:20.612 "write_zeroes": true, 00:14:20.612 "zcopy": true, 00:14:20.612 "get_zone_info": false, 00:14:20.612 "zone_management": false, 00:14:20.612 "zone_append": false, 00:14:20.612 "compare": false, 00:14:20.612 "compare_and_write": false, 00:14:20.612 "abort": true, 00:14:20.612 "seek_hole": false, 00:14:20.612 "seek_data": false, 00:14:20.612 "copy": true, 00:14:20.612 "nvme_iov_md": false 00:14:20.612 }, 00:14:20.612 "memory_domains": [ 00:14:20.612 { 00:14:20.612 "dma_device_id": "system", 00:14:20.612 "dma_device_type": 1 00:14:20.612 }, 00:14:20.612 { 00:14:20.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.612 "dma_device_type": 2 00:14:20.612 } 
00:14:20.612 ], 00:14:20.612 "driver_specific": {} 00:14:20.612 } 00:14:20.612 ] 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:20.612 "name": "Existed_Raid", 00:14:20.612 "uuid": "28e6c4d0-ebe6-458f-a1c2-5f6fbe5e53ec", 00:14:20.612 "strip_size_kb": 64, 00:14:20.612 "state": "configuring", 00:14:20.612 "raid_level": "raid5f", 00:14:20.612 "superblock": true, 00:14:20.612 "num_base_bdevs": 3, 00:14:20.612 "num_base_bdevs_discovered": 2, 00:14:20.612 "num_base_bdevs_operational": 3, 00:14:20.612 "base_bdevs_list": [ 00:14:20.612 { 00:14:20.612 "name": "BaseBdev1", 00:14:20.612 "uuid": "9058fdf1-e7f6-4a03-9732-7393063ac74d", 00:14:20.612 "is_configured": true, 00:14:20.612 "data_offset": 2048, 00:14:20.612 "data_size": 63488 00:14:20.612 }, 00:14:20.612 { 00:14:20.612 "name": "BaseBdev2", 00:14:20.612 "uuid": "c560eb16-830c-46d6-8817-0f0a7052080a", 00:14:20.612 "is_configured": true, 00:14:20.612 "data_offset": 2048, 00:14:20.612 "data_size": 63488 00:14:20.612 }, 00:14:20.612 { 00:14:20.612 "name": "BaseBdev3", 00:14:20.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:20.612 "is_configured": false, 00:14:20.612 "data_offset": 0, 00:14:20.612 "data_size": 0 00:14:20.612 } 00:14:20.612 ] 00:14:20.612 }' 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:20.612 18:55:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.876 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:20.876 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:14:20.876 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.137 [2024-11-16 18:55:04.361102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.137 [2024-11-16 18:55:04.361446] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:21.137 [2024-11-16 18:55:04.361473] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:21.137 [2024-11-16 18:55:04.361753] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:21.137 BaseBdev3 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.137 [2024-11-16 18:55:04.367200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:21.137 [2024-11-16 18:55:04.367257] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:21.137 [2024-11-16 18:55:04.367494] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.137 [ 00:14:21.137 { 00:14:21.137 "name": "BaseBdev3", 00:14:21.137 "aliases": [ 00:14:21.137 "a93b5bde-545d-435e-b97f-8c8eefaa72c9" 00:14:21.137 ], 00:14:21.137 "product_name": "Malloc disk", 00:14:21.137 "block_size": 512, 00:14:21.137 "num_blocks": 65536, 00:14:21.137 "uuid": "a93b5bde-545d-435e-b97f-8c8eefaa72c9", 00:14:21.137 "assigned_rate_limits": { 00:14:21.137 "rw_ios_per_sec": 0, 00:14:21.137 "rw_mbytes_per_sec": 0, 00:14:21.137 "r_mbytes_per_sec": 0, 00:14:21.137 "w_mbytes_per_sec": 0 00:14:21.137 }, 00:14:21.137 "claimed": true, 00:14:21.137 "claim_type": "exclusive_write", 00:14:21.137 "zoned": false, 00:14:21.137 "supported_io_types": { 00:14:21.137 "read": true, 00:14:21.137 "write": true, 00:14:21.137 "unmap": true, 00:14:21.137 "flush": true, 00:14:21.137 "reset": true, 00:14:21.137 "nvme_admin": false, 00:14:21.137 "nvme_io": false, 00:14:21.137 "nvme_io_md": false, 00:14:21.137 "write_zeroes": true, 00:14:21.137 "zcopy": true, 00:14:21.137 "get_zone_info": false, 00:14:21.137 "zone_management": false, 00:14:21.137 "zone_append": false, 00:14:21.137 "compare": false, 00:14:21.137 "compare_and_write": false, 00:14:21.137 "abort": true, 00:14:21.137 "seek_hole": false, 00:14:21.137 "seek_data": false, 00:14:21.137 "copy": true, 00:14:21.137 
"nvme_iov_md": false 00:14:21.137 }, 00:14:21.137 "memory_domains": [ 00:14:21.137 { 00:14:21.137 "dma_device_id": "system", 00:14:21.137 "dma_device_type": 1 00:14:21.137 }, 00:14:21.137 { 00:14:21.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.137 "dma_device_type": 2 00:14:21.137 } 00:14:21.137 ], 00:14:21.137 "driver_specific": {} 00:14:21.137 } 00:14:21.137 ] 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.137 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.137 "name": "Existed_Raid", 00:14:21.137 "uuid": "28e6c4d0-ebe6-458f-a1c2-5f6fbe5e53ec", 00:14:21.137 "strip_size_kb": 64, 00:14:21.137 "state": "online", 00:14:21.137 "raid_level": "raid5f", 00:14:21.137 "superblock": true, 00:14:21.137 "num_base_bdevs": 3, 00:14:21.137 "num_base_bdevs_discovered": 3, 00:14:21.137 "num_base_bdevs_operational": 3, 00:14:21.137 "base_bdevs_list": [ 00:14:21.137 { 00:14:21.137 "name": "BaseBdev1", 00:14:21.137 "uuid": "9058fdf1-e7f6-4a03-9732-7393063ac74d", 00:14:21.137 "is_configured": true, 00:14:21.137 "data_offset": 2048, 00:14:21.137 "data_size": 63488 00:14:21.137 }, 00:14:21.137 { 00:14:21.137 "name": "BaseBdev2", 00:14:21.137 "uuid": "c560eb16-830c-46d6-8817-0f0a7052080a", 00:14:21.137 "is_configured": true, 00:14:21.138 "data_offset": 2048, 00:14:21.138 "data_size": 63488 00:14:21.138 }, 00:14:21.138 { 00:14:21.138 "name": "BaseBdev3", 00:14:21.138 "uuid": "a93b5bde-545d-435e-b97f-8c8eefaa72c9", 00:14:21.138 "is_configured": true, 00:14:21.138 "data_offset": 2048, 00:14:21.138 "data_size": 63488 00:14:21.138 } 00:14:21.138 ] 00:14:21.138 }' 00:14:21.138 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.138 18:55:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.397 [2024-11-16 18:55:04.840733] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:21.397 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.657 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:21.657 "name": "Existed_Raid", 00:14:21.657 "aliases": [ 00:14:21.657 "28e6c4d0-ebe6-458f-a1c2-5f6fbe5e53ec" 00:14:21.657 ], 00:14:21.657 "product_name": "Raid Volume", 00:14:21.657 "block_size": 512, 00:14:21.657 "num_blocks": 126976, 00:14:21.657 "uuid": "28e6c4d0-ebe6-458f-a1c2-5f6fbe5e53ec", 00:14:21.657 "assigned_rate_limits": { 00:14:21.657 "rw_ios_per_sec": 0, 00:14:21.657 
"rw_mbytes_per_sec": 0, 00:14:21.657 "r_mbytes_per_sec": 0, 00:14:21.657 "w_mbytes_per_sec": 0 00:14:21.657 }, 00:14:21.657 "claimed": false, 00:14:21.657 "zoned": false, 00:14:21.657 "supported_io_types": { 00:14:21.657 "read": true, 00:14:21.657 "write": true, 00:14:21.657 "unmap": false, 00:14:21.657 "flush": false, 00:14:21.657 "reset": true, 00:14:21.657 "nvme_admin": false, 00:14:21.657 "nvme_io": false, 00:14:21.657 "nvme_io_md": false, 00:14:21.657 "write_zeroes": true, 00:14:21.657 "zcopy": false, 00:14:21.657 "get_zone_info": false, 00:14:21.657 "zone_management": false, 00:14:21.657 "zone_append": false, 00:14:21.657 "compare": false, 00:14:21.657 "compare_and_write": false, 00:14:21.657 "abort": false, 00:14:21.657 "seek_hole": false, 00:14:21.657 "seek_data": false, 00:14:21.657 "copy": false, 00:14:21.657 "nvme_iov_md": false 00:14:21.657 }, 00:14:21.657 "driver_specific": { 00:14:21.658 "raid": { 00:14:21.658 "uuid": "28e6c4d0-ebe6-458f-a1c2-5f6fbe5e53ec", 00:14:21.658 "strip_size_kb": 64, 00:14:21.658 "state": "online", 00:14:21.658 "raid_level": "raid5f", 00:14:21.658 "superblock": true, 00:14:21.658 "num_base_bdevs": 3, 00:14:21.658 "num_base_bdevs_discovered": 3, 00:14:21.658 "num_base_bdevs_operational": 3, 00:14:21.658 "base_bdevs_list": [ 00:14:21.658 { 00:14:21.658 "name": "BaseBdev1", 00:14:21.658 "uuid": "9058fdf1-e7f6-4a03-9732-7393063ac74d", 00:14:21.658 "is_configured": true, 00:14:21.658 "data_offset": 2048, 00:14:21.658 "data_size": 63488 00:14:21.658 }, 00:14:21.658 { 00:14:21.658 "name": "BaseBdev2", 00:14:21.658 "uuid": "c560eb16-830c-46d6-8817-0f0a7052080a", 00:14:21.658 "is_configured": true, 00:14:21.658 "data_offset": 2048, 00:14:21.658 "data_size": 63488 00:14:21.658 }, 00:14:21.658 { 00:14:21.658 "name": "BaseBdev3", 00:14:21.658 "uuid": "a93b5bde-545d-435e-b97f-8c8eefaa72c9", 00:14:21.658 "is_configured": true, 00:14:21.658 "data_offset": 2048, 00:14:21.658 "data_size": 63488 00:14:21.658 } 00:14:21.658 ] 00:14:21.658 } 
00:14:21.658 } 00:14:21.658 }' 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:21.658 BaseBdev2 00:14:21.658 BaseBdev3' 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.658 18:55:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.658 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.658 [2024-11-16 
18:55:05.108089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:21.917 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:21.918 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:21.918 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:21.918 18:55:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:21.918 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.918 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.918 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:21.918 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.918 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:21.918 "name": "Existed_Raid", 00:14:21.918 "uuid": "28e6c4d0-ebe6-458f-a1c2-5f6fbe5e53ec", 00:14:21.918 "strip_size_kb": 64, 00:14:21.918 "state": "online", 00:14:21.918 "raid_level": "raid5f", 00:14:21.918 "superblock": true, 00:14:21.918 "num_base_bdevs": 3, 00:14:21.918 "num_base_bdevs_discovered": 2, 00:14:21.918 "num_base_bdevs_operational": 2, 00:14:21.918 "base_bdevs_list": [ 00:14:21.918 { 00:14:21.918 "name": null, 00:14:21.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.918 "is_configured": false, 00:14:21.918 "data_offset": 0, 00:14:21.918 "data_size": 63488 00:14:21.918 }, 00:14:21.918 { 00:14:21.918 "name": "BaseBdev2", 00:14:21.918 "uuid": "c560eb16-830c-46d6-8817-0f0a7052080a", 00:14:21.918 "is_configured": true, 00:14:21.918 "data_offset": 2048, 00:14:21.918 "data_size": 63488 00:14:21.918 }, 00:14:21.918 { 00:14:21.918 "name": "BaseBdev3", 00:14:21.918 "uuid": "a93b5bde-545d-435e-b97f-8c8eefaa72c9", 00:14:21.918 "is_configured": true, 00:14:21.918 "data_offset": 2048, 00:14:21.918 "data_size": 63488 00:14:21.918 } 00:14:21.918 ] 00:14:21.918 }' 00:14:21.918 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:21.918 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:22.177 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:22.177 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.177 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.177 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.177 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.177 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.436 [2024-11-16 18:55:05.693621] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.436 [2024-11-16 18:55:05.693777] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:22.436 [2024-11-16 18:55:05.782855] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.436 18:55:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.436 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.436 [2024-11-16 18:55:05.842773] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:22.436 [2024-11-16 18:55:05.842881] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.697 
18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.697 18:55:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.697 BaseBdev2 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:22.697 18:55:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.697 [ 00:14:22.697 { 00:14:22.697 "name": "BaseBdev2", 00:14:22.697 "aliases": [ 00:14:22.697 "ebe74a22-73bf-4809-9375-ec26512920a9" 00:14:22.697 ], 00:14:22.697 "product_name": "Malloc disk", 00:14:22.697 "block_size": 512, 00:14:22.697 "num_blocks": 65536, 00:14:22.697 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:22.697 "assigned_rate_limits": { 00:14:22.697 "rw_ios_per_sec": 0, 00:14:22.697 "rw_mbytes_per_sec": 0, 00:14:22.697 "r_mbytes_per_sec": 0, 00:14:22.697 "w_mbytes_per_sec": 0 00:14:22.697 }, 00:14:22.697 "claimed": false, 00:14:22.697 "zoned": false, 00:14:22.697 "supported_io_types": { 00:14:22.697 "read": true, 00:14:22.697 "write": true, 00:14:22.697 "unmap": true, 00:14:22.697 "flush": true, 00:14:22.697 "reset": true, 00:14:22.697 "nvme_admin": false, 00:14:22.697 "nvme_io": false, 00:14:22.697 "nvme_io_md": false, 00:14:22.697 "write_zeroes": true, 00:14:22.697 "zcopy": true, 00:14:22.697 "get_zone_info": false, 
00:14:22.697 "zone_management": false, 00:14:22.697 "zone_append": false, 00:14:22.697 "compare": false, 00:14:22.697 "compare_and_write": false, 00:14:22.697 "abort": true, 00:14:22.697 "seek_hole": false, 00:14:22.697 "seek_data": false, 00:14:22.697 "copy": true, 00:14:22.697 "nvme_iov_md": false 00:14:22.697 }, 00:14:22.697 "memory_domains": [ 00:14:22.697 { 00:14:22.697 "dma_device_id": "system", 00:14:22.697 "dma_device_type": 1 00:14:22.697 }, 00:14:22.697 { 00:14:22.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.697 "dma_device_type": 2 00:14:22.697 } 00:14:22.697 ], 00:14:22.697 "driver_specific": {} 00:14:22.697 } 00:14:22.697 ] 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.697 BaseBdev3 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:22.697 18:55:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.697 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.697 [ 00:14:22.697 { 00:14:22.697 "name": "BaseBdev3", 00:14:22.697 "aliases": [ 00:14:22.697 "ba859aa3-a249-4bca-b09e-d417bc069c98" 00:14:22.697 ], 00:14:22.697 "product_name": "Malloc disk", 00:14:22.697 "block_size": 512, 00:14:22.697 "num_blocks": 65536, 00:14:22.697 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:22.697 "assigned_rate_limits": { 00:14:22.698 "rw_ios_per_sec": 0, 00:14:22.698 "rw_mbytes_per_sec": 0, 00:14:22.698 "r_mbytes_per_sec": 0, 00:14:22.698 "w_mbytes_per_sec": 0 00:14:22.698 }, 00:14:22.698 "claimed": false, 00:14:22.698 "zoned": false, 00:14:22.698 "supported_io_types": { 00:14:22.698 "read": true, 00:14:22.698 "write": true, 00:14:22.698 "unmap": true, 00:14:22.698 "flush": true, 00:14:22.698 "reset": true, 00:14:22.698 "nvme_admin": false, 00:14:22.698 "nvme_io": false, 00:14:22.698 "nvme_io_md": 
false, 00:14:22.698 "write_zeroes": true, 00:14:22.698 "zcopy": true, 00:14:22.698 "get_zone_info": false, 00:14:22.698 "zone_management": false, 00:14:22.698 "zone_append": false, 00:14:22.698 "compare": false, 00:14:22.698 "compare_and_write": false, 00:14:22.698 "abort": true, 00:14:22.698 "seek_hole": false, 00:14:22.698 "seek_data": false, 00:14:22.698 "copy": true, 00:14:22.698 "nvme_iov_md": false 00:14:22.698 }, 00:14:22.698 "memory_domains": [ 00:14:22.698 { 00:14:22.698 "dma_device_id": "system", 00:14:22.698 "dma_device_type": 1 00:14:22.698 }, 00:14:22.698 { 00:14:22.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.698 "dma_device_type": 2 00:14:22.698 } 00:14:22.698 ], 00:14:22.698 "driver_specific": {} 00:14:22.698 } 00:14:22.698 ] 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.698 [2024-11-16 18:55:06.146473] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.698 [2024-11-16 18:55:06.146570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.698 [2024-11-16 18:55:06.146613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:14:22.698 [2024-11-16 18:55:06.148401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.698 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:22.958 18:55:06 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.958 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.958 "name": "Existed_Raid", 00:14:22.958 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:22.958 "strip_size_kb": 64, 00:14:22.958 "state": "configuring", 00:14:22.958 "raid_level": "raid5f", 00:14:22.958 "superblock": true, 00:14:22.958 "num_base_bdevs": 3, 00:14:22.958 "num_base_bdevs_discovered": 2, 00:14:22.958 "num_base_bdevs_operational": 3, 00:14:22.958 "base_bdevs_list": [ 00:14:22.958 { 00:14:22.958 "name": "BaseBdev1", 00:14:22.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.958 "is_configured": false, 00:14:22.958 "data_offset": 0, 00:14:22.958 "data_size": 0 00:14:22.958 }, 00:14:22.958 { 00:14:22.958 "name": "BaseBdev2", 00:14:22.958 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:22.958 "is_configured": true, 00:14:22.958 "data_offset": 2048, 00:14:22.958 "data_size": 63488 00:14:22.958 }, 00:14:22.958 { 00:14:22.958 "name": "BaseBdev3", 00:14:22.958 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:22.958 "is_configured": true, 00:14:22.958 "data_offset": 2048, 00:14:22.958 "data_size": 63488 00:14:22.958 } 00:14:22.958 ] 00:14:22.958 }' 00:14:22.958 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.958 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.218 [2024-11-16 18:55:06.589693] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.218 
18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:23.218 "name": "Existed_Raid", 00:14:23.218 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:23.218 "strip_size_kb": 64, 00:14:23.218 "state": "configuring", 00:14:23.218 "raid_level": "raid5f", 00:14:23.218 "superblock": true, 00:14:23.218 "num_base_bdevs": 3, 00:14:23.218 "num_base_bdevs_discovered": 1, 00:14:23.218 "num_base_bdevs_operational": 3, 00:14:23.218 "base_bdevs_list": [ 00:14:23.218 { 00:14:23.218 "name": "BaseBdev1", 00:14:23.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.218 "is_configured": false, 00:14:23.218 "data_offset": 0, 00:14:23.218 "data_size": 0 00:14:23.218 }, 00:14:23.218 { 00:14:23.218 "name": null, 00:14:23.218 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:23.218 "is_configured": false, 00:14:23.218 "data_offset": 0, 00:14:23.218 "data_size": 63488 00:14:23.218 }, 00:14:23.218 { 00:14:23.218 "name": "BaseBdev3", 00:14:23.218 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:23.218 "is_configured": true, 00:14:23.218 "data_offset": 2048, 00:14:23.218 "data_size": 63488 00:14:23.218 } 00:14:23.218 ] 00:14:23.218 }' 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.218 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.478 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.478 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.478 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.478 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.738 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.738 18:55:06 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:14:23.738 18:55:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.738 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.738 18:55:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.738 [2024-11-16 18:55:07.009199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.738 BaseBdev1 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.738 
18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.738 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.738 [ 00:14:23.738 { 00:14:23.738 "name": "BaseBdev1", 00:14:23.738 "aliases": [ 00:14:23.738 "e331ea84-5070-439b-8801-e19cee94e93d" 00:14:23.738 ], 00:14:23.738 "product_name": "Malloc disk", 00:14:23.738 "block_size": 512, 00:14:23.738 "num_blocks": 65536, 00:14:23.738 "uuid": "e331ea84-5070-439b-8801-e19cee94e93d", 00:14:23.738 "assigned_rate_limits": { 00:14:23.738 "rw_ios_per_sec": 0, 00:14:23.738 "rw_mbytes_per_sec": 0, 00:14:23.738 "r_mbytes_per_sec": 0, 00:14:23.738 "w_mbytes_per_sec": 0 00:14:23.738 }, 00:14:23.738 "claimed": true, 00:14:23.738 "claim_type": "exclusive_write", 00:14:23.738 "zoned": false, 00:14:23.738 "supported_io_types": { 00:14:23.738 "read": true, 00:14:23.738 "write": true, 00:14:23.738 "unmap": true, 00:14:23.738 "flush": true, 00:14:23.738 "reset": true, 00:14:23.738 "nvme_admin": false, 00:14:23.738 "nvme_io": false, 00:14:23.738 "nvme_io_md": false, 00:14:23.738 "write_zeroes": true, 00:14:23.738 "zcopy": true, 00:14:23.738 "get_zone_info": false, 00:14:23.738 "zone_management": false, 00:14:23.738 "zone_append": false, 00:14:23.738 "compare": false, 00:14:23.738 "compare_and_write": false, 00:14:23.738 "abort": true, 00:14:23.738 "seek_hole": false, 00:14:23.738 "seek_data": false, 00:14:23.739 "copy": true, 00:14:23.739 "nvme_iov_md": false 00:14:23.739 }, 00:14:23.739 "memory_domains": [ 00:14:23.739 { 00:14:23.739 "dma_device_id": "system", 00:14:23.739 "dma_device_type": 1 00:14:23.739 }, 00:14:23.739 { 00:14:23.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.739 "dma_device_type": 2 00:14:23.739 } 00:14:23.739 ], 00:14:23.739 "driver_specific": {} 00:14:23.739 } 00:14:23.739 ] 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.739 
18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:14:23.739 "name": "Existed_Raid", 00:14:23.739 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:23.739 "strip_size_kb": 64, 00:14:23.739 "state": "configuring", 00:14:23.739 "raid_level": "raid5f", 00:14:23.739 "superblock": true, 00:14:23.739 "num_base_bdevs": 3, 00:14:23.739 "num_base_bdevs_discovered": 2, 00:14:23.739 "num_base_bdevs_operational": 3, 00:14:23.739 "base_bdevs_list": [ 00:14:23.739 { 00:14:23.739 "name": "BaseBdev1", 00:14:23.739 "uuid": "e331ea84-5070-439b-8801-e19cee94e93d", 00:14:23.739 "is_configured": true, 00:14:23.739 "data_offset": 2048, 00:14:23.739 "data_size": 63488 00:14:23.739 }, 00:14:23.739 { 00:14:23.739 "name": null, 00:14:23.739 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:23.739 "is_configured": false, 00:14:23.739 "data_offset": 0, 00:14:23.739 "data_size": 63488 00:14:23.739 }, 00:14:23.739 { 00:14:23.739 "name": "BaseBdev3", 00:14:23.739 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:23.739 "is_configured": true, 00:14:23.739 "data_offset": 2048, 00:14:23.739 "data_size": 63488 00:14:23.739 } 00:14:23.739 ] 00:14:23.739 }' 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:23.739 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.309 [2024-11-16 18:55:07.564277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.309 18:55:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.309 "name": "Existed_Raid", 00:14:24.309 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:24.309 "strip_size_kb": 64, 00:14:24.309 "state": "configuring", 00:14:24.309 "raid_level": "raid5f", 00:14:24.309 "superblock": true, 00:14:24.309 "num_base_bdevs": 3, 00:14:24.309 "num_base_bdevs_discovered": 1, 00:14:24.309 "num_base_bdevs_operational": 3, 00:14:24.309 "base_bdevs_list": [ 00:14:24.309 { 00:14:24.309 "name": "BaseBdev1", 00:14:24.309 "uuid": "e331ea84-5070-439b-8801-e19cee94e93d", 00:14:24.309 "is_configured": true, 00:14:24.309 "data_offset": 2048, 00:14:24.309 "data_size": 63488 00:14:24.309 }, 00:14:24.309 { 00:14:24.309 "name": null, 00:14:24.309 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:24.309 "is_configured": false, 00:14:24.309 "data_offset": 0, 00:14:24.309 "data_size": 63488 00:14:24.309 }, 00:14:24.309 { 00:14:24.309 "name": null, 00:14:24.309 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:24.309 "is_configured": false, 00:14:24.309 "data_offset": 0, 00:14:24.309 "data_size": 63488 00:14:24.309 } 00:14:24.309 ] 00:14:24.309 }' 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.309 18:55:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.569 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:14:24.569 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:24.569 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.569 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.569 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.829 [2024-11-16 18:55:08.059483] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.829 18:55:08 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.829 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.830 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.830 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.830 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.830 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.830 "name": "Existed_Raid", 00:14:24.830 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:24.830 "strip_size_kb": 64, 00:14:24.830 "state": "configuring", 00:14:24.830 "raid_level": "raid5f", 00:14:24.830 "superblock": true, 00:14:24.830 "num_base_bdevs": 3, 00:14:24.830 "num_base_bdevs_discovered": 2, 00:14:24.830 "num_base_bdevs_operational": 3, 00:14:24.830 "base_bdevs_list": [ 00:14:24.830 { 00:14:24.830 "name": "BaseBdev1", 00:14:24.830 "uuid": "e331ea84-5070-439b-8801-e19cee94e93d", 00:14:24.830 "is_configured": true, 00:14:24.830 "data_offset": 2048, 00:14:24.830 "data_size": 63488 00:14:24.830 }, 00:14:24.830 { 00:14:24.830 "name": null, 00:14:24.830 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:24.830 "is_configured": false, 00:14:24.830 "data_offset": 0, 00:14:24.830 "data_size": 63488 00:14:24.830 }, 00:14:24.830 { 
00:14:24.830 "name": "BaseBdev3", 00:14:24.830 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:24.830 "is_configured": true, 00:14:24.830 "data_offset": 2048, 00:14:24.830 "data_size": 63488 00:14:24.830 } 00:14:24.830 ] 00:14:24.830 }' 00:14:24.830 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.830 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.089 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.089 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.089 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.089 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.089 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.349 [2024-11-16 18:55:08.582609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.349 "name": "Existed_Raid", 00:14:25.349 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:25.349 "strip_size_kb": 64, 00:14:25.349 "state": "configuring", 00:14:25.349 "raid_level": "raid5f", 00:14:25.349 "superblock": true, 00:14:25.349 "num_base_bdevs": 3, 00:14:25.349 "num_base_bdevs_discovered": 1, 00:14:25.349 
"num_base_bdevs_operational": 3, 00:14:25.349 "base_bdevs_list": [ 00:14:25.349 { 00:14:25.349 "name": null, 00:14:25.349 "uuid": "e331ea84-5070-439b-8801-e19cee94e93d", 00:14:25.349 "is_configured": false, 00:14:25.349 "data_offset": 0, 00:14:25.349 "data_size": 63488 00:14:25.349 }, 00:14:25.349 { 00:14:25.349 "name": null, 00:14:25.349 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:25.349 "is_configured": false, 00:14:25.349 "data_offset": 0, 00:14:25.349 "data_size": 63488 00:14:25.349 }, 00:14:25.349 { 00:14:25.349 "name": "BaseBdev3", 00:14:25.349 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:25.349 "is_configured": true, 00:14:25.349 "data_offset": 2048, 00:14:25.349 "data_size": 63488 00:14:25.349 } 00:14:25.349 ] 00:14:25.349 }' 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.349 18:55:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.917 18:55:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.917 [2024-11-16 18:55:09.148573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.917 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:25.918 "name": "Existed_Raid", 00:14:25.918 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:25.918 "strip_size_kb": 64, 00:14:25.918 "state": "configuring", 00:14:25.918 "raid_level": "raid5f", 00:14:25.918 "superblock": true, 00:14:25.918 "num_base_bdevs": 3, 00:14:25.918 "num_base_bdevs_discovered": 2, 00:14:25.918 "num_base_bdevs_operational": 3, 00:14:25.918 "base_bdevs_list": [ 00:14:25.918 { 00:14:25.918 "name": null, 00:14:25.918 "uuid": "e331ea84-5070-439b-8801-e19cee94e93d", 00:14:25.918 "is_configured": false, 00:14:25.918 "data_offset": 0, 00:14:25.918 "data_size": 63488 00:14:25.918 }, 00:14:25.918 { 00:14:25.918 "name": "BaseBdev2", 00:14:25.918 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:25.918 "is_configured": true, 00:14:25.918 "data_offset": 2048, 00:14:25.918 "data_size": 63488 00:14:25.918 }, 00:14:25.918 { 00:14:25.918 "name": "BaseBdev3", 00:14:25.918 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:25.918 "is_configured": true, 00:14:25.918 "data_offset": 2048, 00:14:25.918 "data_size": 63488 00:14:25.918 } 00:14:25.918 ] 00:14:25.918 }' 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:25.918 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e331ea84-5070-439b-8801-e19cee94e93d 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.176 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.435 [2024-11-16 18:55:09.666908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:26.435 [2024-11-16 18:55:09.667213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:26.435 [2024-11-16 18:55:09.667234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:26.435 [2024-11-16 18:55:09.667485] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:26.435 NewBaseBdev 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.435 [2024-11-16 18:55:09.672513] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:26.435 [2024-11-16 18:55:09.672574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:14:26.435 [2024-11-16 18:55:09.672784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.435 [ 00:14:26.435 { 00:14:26.435 "name": "NewBaseBdev", 00:14:26.435 "aliases": [ 00:14:26.435 "e331ea84-5070-439b-8801-e19cee94e93d" 00:14:26.435 
], 00:14:26.435 "product_name": "Malloc disk", 00:14:26.435 "block_size": 512, 00:14:26.435 "num_blocks": 65536, 00:14:26.435 "uuid": "e331ea84-5070-439b-8801-e19cee94e93d", 00:14:26.435 "assigned_rate_limits": { 00:14:26.435 "rw_ios_per_sec": 0, 00:14:26.435 "rw_mbytes_per_sec": 0, 00:14:26.435 "r_mbytes_per_sec": 0, 00:14:26.435 "w_mbytes_per_sec": 0 00:14:26.435 }, 00:14:26.435 "claimed": true, 00:14:26.435 "claim_type": "exclusive_write", 00:14:26.435 "zoned": false, 00:14:26.435 "supported_io_types": { 00:14:26.435 "read": true, 00:14:26.435 "write": true, 00:14:26.435 "unmap": true, 00:14:26.435 "flush": true, 00:14:26.435 "reset": true, 00:14:26.435 "nvme_admin": false, 00:14:26.435 "nvme_io": false, 00:14:26.435 "nvme_io_md": false, 00:14:26.435 "write_zeroes": true, 00:14:26.435 "zcopy": true, 00:14:26.435 "get_zone_info": false, 00:14:26.435 "zone_management": false, 00:14:26.435 "zone_append": false, 00:14:26.435 "compare": false, 00:14:26.435 "compare_and_write": false, 00:14:26.435 "abort": true, 00:14:26.435 "seek_hole": false, 00:14:26.435 "seek_data": false, 00:14:26.435 "copy": true, 00:14:26.435 "nvme_iov_md": false 00:14:26.435 }, 00:14:26.435 "memory_domains": [ 00:14:26.435 { 00:14:26.435 "dma_device_id": "system", 00:14:26.435 "dma_device_type": 1 00:14:26.435 }, 00:14:26.435 { 00:14:26.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:26.435 "dma_device_type": 2 00:14:26.435 } 00:14:26.435 ], 00:14:26.435 "driver_specific": {} 00:14:26.435 } 00:14:26.435 ] 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.435 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.435 "name": "Existed_Raid", 00:14:26.435 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:26.435 "strip_size_kb": 64, 00:14:26.435 "state": "online", 00:14:26.435 "raid_level": "raid5f", 00:14:26.435 "superblock": true, 00:14:26.435 "num_base_bdevs": 3, 00:14:26.435 "num_base_bdevs_discovered": 3, 00:14:26.435 
"num_base_bdevs_operational": 3, 00:14:26.435 "base_bdevs_list": [ 00:14:26.435 { 00:14:26.435 "name": "NewBaseBdev", 00:14:26.435 "uuid": "e331ea84-5070-439b-8801-e19cee94e93d", 00:14:26.435 "is_configured": true, 00:14:26.435 "data_offset": 2048, 00:14:26.435 "data_size": 63488 00:14:26.435 }, 00:14:26.435 { 00:14:26.435 "name": "BaseBdev2", 00:14:26.435 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:26.435 "is_configured": true, 00:14:26.435 "data_offset": 2048, 00:14:26.435 "data_size": 63488 00:14:26.435 }, 00:14:26.435 { 00:14:26.435 "name": "BaseBdev3", 00:14:26.436 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:26.436 "is_configured": true, 00:14:26.436 "data_offset": 2048, 00:14:26.436 "data_size": 63488 00:14:26.436 } 00:14:26.436 ] 00:14:26.436 }' 00:14:26.436 18:55:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.436 18:55:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:26.694 18:55:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.694 [2024-11-16 18:55:10.134365] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.694 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:26.694 "name": "Existed_Raid", 00:14:26.694 "aliases": [ 00:14:26.694 "2dddf699-c1cd-4a9a-872e-06d68dd63c08" 00:14:26.694 ], 00:14:26.694 "product_name": "Raid Volume", 00:14:26.694 "block_size": 512, 00:14:26.694 "num_blocks": 126976, 00:14:26.694 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:26.694 "assigned_rate_limits": { 00:14:26.694 "rw_ios_per_sec": 0, 00:14:26.694 "rw_mbytes_per_sec": 0, 00:14:26.694 "r_mbytes_per_sec": 0, 00:14:26.694 "w_mbytes_per_sec": 0 00:14:26.694 }, 00:14:26.694 "claimed": false, 00:14:26.694 "zoned": false, 00:14:26.694 "supported_io_types": { 00:14:26.694 "read": true, 00:14:26.694 "write": true, 00:14:26.694 "unmap": false, 00:14:26.694 "flush": false, 00:14:26.694 "reset": true, 00:14:26.694 "nvme_admin": false, 00:14:26.694 "nvme_io": false, 00:14:26.694 "nvme_io_md": false, 00:14:26.694 "write_zeroes": true, 00:14:26.694 "zcopy": false, 00:14:26.694 "get_zone_info": false, 00:14:26.694 "zone_management": false, 00:14:26.694 "zone_append": false, 00:14:26.694 "compare": false, 00:14:26.694 "compare_and_write": false, 00:14:26.694 "abort": false, 00:14:26.694 "seek_hole": false, 00:14:26.695 "seek_data": false, 00:14:26.695 "copy": false, 00:14:26.695 "nvme_iov_md": false 00:14:26.695 }, 00:14:26.695 "driver_specific": { 00:14:26.695 "raid": { 00:14:26.695 "uuid": "2dddf699-c1cd-4a9a-872e-06d68dd63c08", 00:14:26.695 "strip_size_kb": 64, 00:14:26.695 "state": "online", 00:14:26.695 
"raid_level": "raid5f", 00:14:26.695 "superblock": true, 00:14:26.695 "num_base_bdevs": 3, 00:14:26.695 "num_base_bdevs_discovered": 3, 00:14:26.695 "num_base_bdevs_operational": 3, 00:14:26.695 "base_bdevs_list": [ 00:14:26.695 { 00:14:26.695 "name": "NewBaseBdev", 00:14:26.695 "uuid": "e331ea84-5070-439b-8801-e19cee94e93d", 00:14:26.695 "is_configured": true, 00:14:26.695 "data_offset": 2048, 00:14:26.695 "data_size": 63488 00:14:26.695 }, 00:14:26.695 { 00:14:26.695 "name": "BaseBdev2", 00:14:26.695 "uuid": "ebe74a22-73bf-4809-9375-ec26512920a9", 00:14:26.695 "is_configured": true, 00:14:26.695 "data_offset": 2048, 00:14:26.695 "data_size": 63488 00:14:26.695 }, 00:14:26.695 { 00:14:26.695 "name": "BaseBdev3", 00:14:26.695 "uuid": "ba859aa3-a249-4bca-b09e-d417bc069c98", 00:14:26.695 "is_configured": true, 00:14:26.695 "data_offset": 2048, 00:14:26.695 "data_size": 63488 00:14:26.695 } 00:14:26.695 ] 00:14:26.695 } 00:14:26.695 } 00:14:26.695 }' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:14:26.954 BaseBdev2 00:14:26.954 BaseBdev3' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:26.954 18:55:10 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:26.954 [2024-11-16 18:55:10.373785] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:26.954 [2024-11-16 18:55:10.373850] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:26.954 [2024-11-16 18:55:10.373925] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:26.954 [2024-11-16 18:55:10.374216] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:26.954 [2024-11-16 18:55:10.374230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80216 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80216 ']' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 80216 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80216 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:26.954 killing process with pid 80216 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80216' 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80216 00:14:26.954 [2024-11-16 18:55:10.420613] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:26.954 18:55:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80216 00:14:27.521 [2024-11-16 18:55:10.702261] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:28.457 18:55:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:14:28.457 00:14:28.457 real 0m10.200s 00:14:28.457 user 0m16.305s 00:14:28.457 sys 0m1.764s 00:14:28.457 18:55:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:28.457 ************************************ 00:14:28.457 END TEST raid5f_state_function_test_sb 00:14:28.457 ************************************ 00:14:28.457 18:55:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:28.457 18:55:11 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:14:28.457 18:55:11 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:14:28.457 18:55:11 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:28.457 18:55:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:28.457 ************************************ 00:14:28.457 START TEST raid5f_superblock_test 00:14:28.457 ************************************ 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80832 00:14:28.457 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:14:28.458 18:55:11 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80832 00:14:28.458 18:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80832 ']' 00:14:28.458 18:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.458 18:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:28.458 18:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.458 18:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:28.458 18:55:11 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.458 [2024-11-16 18:55:11.905964] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:14:28.458 [2024-11-16 18:55:11.906165] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80832 ] 00:14:28.716 [2024-11-16 18:55:12.079157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.716 [2024-11-16 18:55:12.179792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.975 [2024-11-16 18:55:12.354888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:28.976 [2024-11-16 18:55:12.354941] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.545 malloc1 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.545 [2024-11-16 18:55:12.811180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:29.545 [2024-11-16 18:55:12.811305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.545 [2024-11-16 18:55:12.811347] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:29.545 [2024-11-16 18:55:12.811377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.545 [2024-11-16 18:55:12.813424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.545 [2024-11-16 18:55:12.813493] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:29.545 pt1 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:29.545 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.546 malloc2 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.546 [2024-11-16 18:55:12.868484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:29.546 [2024-11-16 18:55:12.868575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.546 [2024-11-16 18:55:12.868601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:29.546 [2024-11-16 18:55:12.868610] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.546 [2024-11-16 18:55:12.870608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.546 [2024-11-16 18:55:12.870666] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:29.546 pt2 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.546 malloc3 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.546 [2024-11-16 18:55:12.954839] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:29.546 [2024-11-16 18:55:12.954937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:29.546 [2024-11-16 18:55:12.954975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:29.546 [2024-11-16 18:55:12.955003] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:29.546 [2024-11-16 18:55:12.957066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:29.546 [2024-11-16 18:55:12.957149] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:29.546 pt3 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.546 [2024-11-16 18:55:12.966868] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:29.546 [2024-11-16 18:55:12.968633] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:29.546 [2024-11-16 18:55:12.968766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:29.546 [2024-11-16 18:55:12.968952] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:29.546 [2024-11-16 18:55:12.969001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:14:29.546 [2024-11-16 18:55:12.969236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:29.546 [2024-11-16 18:55:12.974843] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:29.546 [2024-11-16 18:55:12.974893] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:29.546 [2024-11-16 18:55:12.975113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.546 18:55:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.546 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.806 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:29.806 "name": "raid_bdev1", 00:14:29.806 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:29.806 "strip_size_kb": 64, 00:14:29.806 "state": "online", 00:14:29.806 "raid_level": "raid5f", 00:14:29.806 "superblock": true, 00:14:29.806 "num_base_bdevs": 3, 00:14:29.806 "num_base_bdevs_discovered": 3, 00:14:29.806 "num_base_bdevs_operational": 3, 00:14:29.806 "base_bdevs_list": [ 00:14:29.806 { 00:14:29.806 "name": "pt1", 00:14:29.806 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:29.806 "is_configured": true, 00:14:29.806 "data_offset": 2048, 00:14:29.806 "data_size": 63488 00:14:29.806 }, 00:14:29.806 { 00:14:29.806 "name": "pt2", 00:14:29.806 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:29.806 "is_configured": true, 00:14:29.806 "data_offset": 2048, 00:14:29.806 "data_size": 63488 00:14:29.806 }, 00:14:29.806 { 00:14:29.806 "name": "pt3", 00:14:29.806 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:29.806 "is_configured": true, 00:14:29.806 "data_offset": 2048, 00:14:29.806 "data_size": 63488 00:14:29.806 } 00:14:29.806 ] 00:14:29.806 }' 00:14:29.806 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:29.806 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:30.072 18:55:13 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.072 [2024-11-16 18:55:13.360589] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:30.072 "name": "raid_bdev1", 00:14:30.072 "aliases": [ 00:14:30.072 "2d5c49b0-92cc-468a-aa92-efc131c102b6" 00:14:30.072 ], 00:14:30.072 "product_name": "Raid Volume", 00:14:30.072 "block_size": 512, 00:14:30.072 "num_blocks": 126976, 00:14:30.072 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:30.072 "assigned_rate_limits": { 00:14:30.072 "rw_ios_per_sec": 0, 00:14:30.072 "rw_mbytes_per_sec": 0, 00:14:30.072 "r_mbytes_per_sec": 0, 00:14:30.072 "w_mbytes_per_sec": 0 00:14:30.072 }, 00:14:30.072 "claimed": false, 00:14:30.072 "zoned": false, 00:14:30.072 "supported_io_types": { 00:14:30.072 "read": true, 00:14:30.072 "write": true, 00:14:30.072 "unmap": false, 00:14:30.072 "flush": false, 00:14:30.072 "reset": true, 00:14:30.072 "nvme_admin": false, 00:14:30.072 "nvme_io": false, 00:14:30.072 "nvme_io_md": false, 
00:14:30.072 "write_zeroes": true, 00:14:30.072 "zcopy": false, 00:14:30.072 "get_zone_info": false, 00:14:30.072 "zone_management": false, 00:14:30.072 "zone_append": false, 00:14:30.072 "compare": false, 00:14:30.072 "compare_and_write": false, 00:14:30.072 "abort": false, 00:14:30.072 "seek_hole": false, 00:14:30.072 "seek_data": false, 00:14:30.072 "copy": false, 00:14:30.072 "nvme_iov_md": false 00:14:30.072 }, 00:14:30.072 "driver_specific": { 00:14:30.072 "raid": { 00:14:30.072 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:30.072 "strip_size_kb": 64, 00:14:30.072 "state": "online", 00:14:30.072 "raid_level": "raid5f", 00:14:30.072 "superblock": true, 00:14:30.072 "num_base_bdevs": 3, 00:14:30.072 "num_base_bdevs_discovered": 3, 00:14:30.072 "num_base_bdevs_operational": 3, 00:14:30.072 "base_bdevs_list": [ 00:14:30.072 { 00:14:30.072 "name": "pt1", 00:14:30.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.072 "is_configured": true, 00:14:30.072 "data_offset": 2048, 00:14:30.072 "data_size": 63488 00:14:30.072 }, 00:14:30.072 { 00:14:30.072 "name": "pt2", 00:14:30.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.072 "is_configured": true, 00:14:30.072 "data_offset": 2048, 00:14:30.072 "data_size": 63488 00:14:30.072 }, 00:14:30.072 { 00:14:30.072 "name": "pt3", 00:14:30.072 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.072 "is_configured": true, 00:14:30.072 "data_offset": 2048, 00:14:30.072 "data_size": 63488 00:14:30.072 } 00:14:30.072 ] 00:14:30.072 } 00:14:30.072 } 00:14:30.072 }' 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:30.072 pt2 00:14:30.072 pt3' 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:30.072 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.073 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.353 
18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:14:30.353 [2024-11-16 18:55:13.588178] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2d5c49b0-92cc-468a-aa92-efc131c102b6 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2d5c49b0-92cc-468a-aa92-efc131c102b6 ']' 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:30.353 18:55:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.353 [2024-11-16 18:55:13.635939] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.353 [2024-11-16 18:55:13.635963] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.353 [2024-11-16 18:55:13.636022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.353 [2024-11-16 18:55:13.636086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.353 [2024-11-16 18:55:13.636095] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.353 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.353 [2024-11-16 18:55:13.767819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:14:30.353 [2024-11-16 18:55:13.769636] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:14:30.353 [2024-11-16 18:55:13.769737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:14:30.353 [2024-11-16 18:55:13.769805] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:14:30.353 [2024-11-16 18:55:13.769914] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:14:30.353 [2024-11-16 18:55:13.769970] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:14:30.354 [2024-11-16 18:55:13.770012] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.354 [2024-11-16 18:55:13.770022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:14:30.354 request: 00:14:30.354 { 00:14:30.354 "name": "raid_bdev1", 00:14:30.354 "raid_level": "raid5f", 00:14:30.354 "base_bdevs": [ 00:14:30.354 "malloc1", 00:14:30.354 "malloc2", 00:14:30.354 "malloc3" 00:14:30.354 ], 00:14:30.354 "strip_size_kb": 64, 00:14:30.354 "superblock": false, 00:14:30.354 "method": "bdev_raid_create", 00:14:30.354 "req_id": 1 00:14:30.354 } 00:14:30.354 Got JSON-RPC error response 00:14:30.354 response: 00:14:30.354 { 00:14:30.354 "code": -17, 00:14:30.354 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:14:30.354 } 00:14:30.354 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:30.354 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:14:30.354 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:30.354 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:30.354 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:30.354 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.354 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:14:30.354 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.354 
18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.354 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.627 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.628 [2024-11-16 18:55:13.831613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:30.628 [2024-11-16 18:55:13.831701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.628 [2024-11-16 18:55:13.831735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:30.628 [2024-11-16 18:55:13.831761] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.628 [2024-11-16 18:55:13.833865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.628 [2024-11-16 18:55:13.833933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:30.628 [2024-11-16 18:55:13.834025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:30.628 [2024-11-16 18:55:13.834116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:30.628 pt1 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.628 "name": "raid_bdev1", 00:14:30.628 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:30.628 "strip_size_kb": 64, 00:14:30.628 "state": "configuring", 00:14:30.628 "raid_level": "raid5f", 00:14:30.628 "superblock": true, 00:14:30.628 "num_base_bdevs": 3, 00:14:30.628 "num_base_bdevs_discovered": 1, 00:14:30.628 
"num_base_bdevs_operational": 3, 00:14:30.628 "base_bdevs_list": [ 00:14:30.628 { 00:14:30.628 "name": "pt1", 00:14:30.628 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.628 "is_configured": true, 00:14:30.628 "data_offset": 2048, 00:14:30.628 "data_size": 63488 00:14:30.628 }, 00:14:30.628 { 00:14:30.628 "name": null, 00:14:30.628 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.628 "is_configured": false, 00:14:30.628 "data_offset": 2048, 00:14:30.628 "data_size": 63488 00:14:30.628 }, 00:14:30.628 { 00:14:30.628 "name": null, 00:14:30.628 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.628 "is_configured": false, 00:14:30.628 "data_offset": 2048, 00:14:30.628 "data_size": 63488 00:14:30.628 } 00:14:30.628 ] 00:14:30.628 }' 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.628 18:55:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.888 [2024-11-16 18:55:14.258873] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:30.888 [2024-11-16 18:55:14.258966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:30.888 [2024-11-16 18:55:14.258988] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:30.888 [2024-11-16 18:55:14.258997] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:30.888 [2024-11-16 18:55:14.259417] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:30.888 [2024-11-16 18:55:14.259441] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:30.888 [2024-11-16 18:55:14.259518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:30.888 [2024-11-16 18:55:14.259537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:30.888 pt2 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.888 [2024-11-16 18:55:14.270879] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.888 "name": "raid_bdev1", 00:14:30.888 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:30.888 "strip_size_kb": 64, 00:14:30.888 "state": "configuring", 00:14:30.888 "raid_level": "raid5f", 00:14:30.888 "superblock": true, 00:14:30.888 "num_base_bdevs": 3, 00:14:30.888 "num_base_bdevs_discovered": 1, 00:14:30.888 "num_base_bdevs_operational": 3, 00:14:30.888 "base_bdevs_list": [ 00:14:30.888 { 00:14:30.888 "name": "pt1", 00:14:30.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:30.888 "is_configured": true, 00:14:30.888 "data_offset": 2048, 00:14:30.888 "data_size": 63488 00:14:30.888 }, 00:14:30.888 { 00:14:30.888 "name": null, 00:14:30.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:30.888 "is_configured": false, 00:14:30.888 "data_offset": 0, 00:14:30.888 "data_size": 63488 00:14:30.888 }, 00:14:30.888 { 00:14:30.888 "name": null, 00:14:30.888 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:30.888 "is_configured": false, 00:14:30.888 "data_offset": 2048, 00:14:30.888 "data_size": 63488 00:14:30.888 } 00:14:30.888 ] 00:14:30.888 }' 00:14:30.888 18:55:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.888 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.458 [2024-11-16 18:55:14.682158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:31.458 [2024-11-16 18:55:14.682253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.458 [2024-11-16 18:55:14.682285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:14:31.458 [2024-11-16 18:55:14.682314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.458 [2024-11-16 18:55:14.682753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.458 [2024-11-16 18:55:14.682817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:31.458 [2024-11-16 18:55:14.682920] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:31.458 [2024-11-16 18:55:14.682970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:31.458 pt2 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:31.458 18:55:14 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.458 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.458 [2024-11-16 18:55:14.690145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:31.458 [2024-11-16 18:55:14.690225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:31.458 [2024-11-16 18:55:14.690253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:31.458 [2024-11-16 18:55:14.690281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:31.458 [2024-11-16 18:55:14.690632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:31.458 [2024-11-16 18:55:14.690707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:31.458 [2024-11-16 18:55:14.690785] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:31.458 [2024-11-16 18:55:14.690830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:31.458 [2024-11-16 18:55:14.690970] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:31.458 [2024-11-16 18:55:14.691009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:31.458 [2024-11-16 18:55:14.691254] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:31.459 [2024-11-16 18:55:14.696467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:31.459 [2024-11-16 18:55:14.696522] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:14:31.459 [2024-11-16 18:55:14.696727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.459 pt3 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:31.459 "name": "raid_bdev1", 00:14:31.459 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:31.459 "strip_size_kb": 64, 00:14:31.459 "state": "online", 00:14:31.459 "raid_level": "raid5f", 00:14:31.459 "superblock": true, 00:14:31.459 "num_base_bdevs": 3, 00:14:31.459 "num_base_bdevs_discovered": 3, 00:14:31.459 "num_base_bdevs_operational": 3, 00:14:31.459 "base_bdevs_list": [ 00:14:31.459 { 00:14:31.459 "name": "pt1", 00:14:31.459 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.459 "is_configured": true, 00:14:31.459 "data_offset": 2048, 00:14:31.459 "data_size": 63488 00:14:31.459 }, 00:14:31.459 { 00:14:31.459 "name": "pt2", 00:14:31.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.459 "is_configured": true, 00:14:31.459 "data_offset": 2048, 00:14:31.459 "data_size": 63488 00:14:31.459 }, 00:14:31.459 { 00:14:31.459 "name": "pt3", 00:14:31.459 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.459 "is_configured": true, 00:14:31.459 "data_offset": 2048, 00:14:31.459 "data_size": 63488 00:14:31.459 } 00:14:31.459 ] 00:14:31.459 }' 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:31.459 18:55:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:31.719 
18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:31.719 [2024-11-16 18:55:15.146546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:31.719 "name": "raid_bdev1", 00:14:31.719 "aliases": [ 00:14:31.719 "2d5c49b0-92cc-468a-aa92-efc131c102b6" 00:14:31.719 ], 00:14:31.719 "product_name": "Raid Volume", 00:14:31.719 "block_size": 512, 00:14:31.719 "num_blocks": 126976, 00:14:31.719 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:31.719 "assigned_rate_limits": { 00:14:31.719 "rw_ios_per_sec": 0, 00:14:31.719 "rw_mbytes_per_sec": 0, 00:14:31.719 "r_mbytes_per_sec": 0, 00:14:31.719 "w_mbytes_per_sec": 0 00:14:31.719 }, 00:14:31.719 "claimed": false, 00:14:31.719 "zoned": false, 00:14:31.719 "supported_io_types": { 00:14:31.719 "read": true, 00:14:31.719 "write": true, 00:14:31.719 "unmap": false, 00:14:31.719 "flush": false, 00:14:31.719 "reset": true, 00:14:31.719 "nvme_admin": false, 00:14:31.719 "nvme_io": false, 00:14:31.719 "nvme_io_md": false, 00:14:31.719 "write_zeroes": true, 00:14:31.719 "zcopy": false, 00:14:31.719 "get_zone_info": false, 
00:14:31.719 "zone_management": false, 00:14:31.719 "zone_append": false, 00:14:31.719 "compare": false, 00:14:31.719 "compare_and_write": false, 00:14:31.719 "abort": false, 00:14:31.719 "seek_hole": false, 00:14:31.719 "seek_data": false, 00:14:31.719 "copy": false, 00:14:31.719 "nvme_iov_md": false 00:14:31.719 }, 00:14:31.719 "driver_specific": { 00:14:31.719 "raid": { 00:14:31.719 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:31.719 "strip_size_kb": 64, 00:14:31.719 "state": "online", 00:14:31.719 "raid_level": "raid5f", 00:14:31.719 "superblock": true, 00:14:31.719 "num_base_bdevs": 3, 00:14:31.719 "num_base_bdevs_discovered": 3, 00:14:31.719 "num_base_bdevs_operational": 3, 00:14:31.719 "base_bdevs_list": [ 00:14:31.719 { 00:14:31.719 "name": "pt1", 00:14:31.719 "uuid": "00000000-0000-0000-0000-000000000001", 00:14:31.719 "is_configured": true, 00:14:31.719 "data_offset": 2048, 00:14:31.719 "data_size": 63488 00:14:31.719 }, 00:14:31.719 { 00:14:31.719 "name": "pt2", 00:14:31.719 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:31.719 "is_configured": true, 00:14:31.719 "data_offset": 2048, 00:14:31.719 "data_size": 63488 00:14:31.719 }, 00:14:31.719 { 00:14:31.719 "name": "pt3", 00:14:31.719 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:31.719 "is_configured": true, 00:14:31.719 "data_offset": 2048, 00:14:31.719 "data_size": 63488 00:14:31.719 } 00:14:31.719 ] 00:14:31.719 } 00:14:31.719 } 00:14:31.719 }' 00:14:31.719 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:14:31.979 pt2 00:14:31.979 pt3' 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.979 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:14:31.980 [2024-11-16 18:55:15.386051] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2d5c49b0-92cc-468a-aa92-efc131c102b6 '!=' 2d5c49b0-92cc-468a-aa92-efc131c102b6 ']' 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:31.980 18:55:15 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:31.980 [2024-11-16 18:55:15.437862] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:31.980 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.239 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:32.240 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.240 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.240 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.240 "name": "raid_bdev1", 00:14:32.240 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:32.240 "strip_size_kb": 64, 00:14:32.240 "state": "online", 00:14:32.240 "raid_level": "raid5f", 00:14:32.240 "superblock": true, 00:14:32.240 "num_base_bdevs": 3, 00:14:32.240 "num_base_bdevs_discovered": 2, 00:14:32.240 "num_base_bdevs_operational": 2, 00:14:32.240 "base_bdevs_list": [ 00:14:32.240 { 00:14:32.240 "name": null, 00:14:32.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.240 "is_configured": false, 00:14:32.240 "data_offset": 0, 00:14:32.240 "data_size": 63488 00:14:32.240 }, 00:14:32.240 { 00:14:32.240 "name": "pt2", 00:14:32.240 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.240 "is_configured": true, 00:14:32.240 "data_offset": 2048, 00:14:32.240 "data_size": 63488 00:14:32.240 }, 00:14:32.240 { 00:14:32.240 "name": "pt3", 00:14:32.240 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.240 "is_configured": true, 00:14:32.240 "data_offset": 2048, 00:14:32.240 "data_size": 63488 00:14:32.240 } 00:14:32.240 ] 00:14:32.240 }' 00:14:32.240 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.240 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.500 [2024-11-16 18:55:15.873061] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:14:32.500 [2024-11-16 18:55:15.873127] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.500 [2024-11-16 18:55:15.873206] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.500 [2024-11-16 18:55:15.873287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.500 [2024-11-16 18:55:15.873322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.500 18:55:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.500 [2024-11-16 18:55:15.944922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:32.500 [2024-11-16 18:55:15.944971] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:32.500 [2024-11-16 18:55:15.944985] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:32.500 [2024-11-16 18:55:15.944995] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:14:32.500 [2024-11-16 18:55:15.947028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:32.500 [2024-11-16 18:55:15.947064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:32.500 [2024-11-16 18:55:15.947133] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:14:32.500 [2024-11-16 18:55:15.947183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:32.500 pt2 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.500 18:55:15 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.500 18:55:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.760 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.760 "name": "raid_bdev1", 00:14:32.760 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:32.760 "strip_size_kb": 64, 00:14:32.760 "state": "configuring", 00:14:32.760 "raid_level": "raid5f", 00:14:32.760 "superblock": true, 00:14:32.760 "num_base_bdevs": 3, 00:14:32.760 "num_base_bdevs_discovered": 1, 00:14:32.760 "num_base_bdevs_operational": 2, 00:14:32.760 "base_bdevs_list": [ 00:14:32.760 { 00:14:32.760 "name": null, 00:14:32.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.760 "is_configured": false, 00:14:32.760 "data_offset": 2048, 00:14:32.760 "data_size": 63488 00:14:32.760 }, 00:14:32.760 { 00:14:32.760 "name": "pt2", 00:14:32.760 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:32.760 "is_configured": true, 00:14:32.760 "data_offset": 2048, 00:14:32.760 "data_size": 63488 00:14:32.760 }, 00:14:32.760 { 00:14:32.760 "name": null, 00:14:32.760 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:32.760 "is_configured": false, 00:14:32.760 "data_offset": 2048, 00:14:32.760 "data_size": 63488 00:14:32.760 } 00:14:32.760 ] 00:14:32.760 }' 00:14:32.760 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.760 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.020 [2024-11-16 18:55:16.384186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:33.020 [2024-11-16 18:55:16.384288] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.020 [2024-11-16 18:55:16.384326] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:33.020 [2024-11-16 18:55:16.384356] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.020 [2024-11-16 18:55:16.384828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.020 [2024-11-16 18:55:16.384888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:33.020 [2024-11-16 18:55:16.384993] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:33.020 [2024-11-16 18:55:16.385062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:33.020 [2024-11-16 18:55:16.385212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:14:33.020 [2024-11-16 18:55:16.385250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:33.020 [2024-11-16 18:55:16.385500] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:33.020 [2024-11-16 18:55:16.390826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:14:33.020 [2024-11-16 18:55:16.390879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:14:33.020 [2024-11-16 18:55:16.391220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.020 pt3 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.020 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.021 "name": "raid_bdev1", 00:14:33.021 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:33.021 "strip_size_kb": 64, 00:14:33.021 "state": "online", 00:14:33.021 "raid_level": "raid5f", 00:14:33.021 "superblock": true, 00:14:33.021 "num_base_bdevs": 3, 00:14:33.021 "num_base_bdevs_discovered": 2, 00:14:33.021 "num_base_bdevs_operational": 2, 00:14:33.021 "base_bdevs_list": [ 00:14:33.021 { 00:14:33.021 "name": null, 00:14:33.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.021 "is_configured": false, 00:14:33.021 "data_offset": 2048, 00:14:33.021 "data_size": 63488 00:14:33.021 }, 00:14:33.021 { 00:14:33.021 "name": "pt2", 00:14:33.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.021 "is_configured": true, 00:14:33.021 "data_offset": 2048, 00:14:33.021 "data_size": 63488 00:14:33.021 }, 00:14:33.021 { 00:14:33.021 "name": "pt3", 00:14:33.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.021 "is_configured": true, 00:14:33.021 "data_offset": 2048, 00:14:33.021 "data_size": 63488 00:14:33.021 } 00:14:33.021 ] 00:14:33.021 }' 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.021 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.591 [2024-11-16 18:55:16.769332] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.591 [2024-11-16 18:55:16.769358] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:33.591 [2024-11-16 18:55:16.769421] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:14:33.591 [2024-11-16 18:55:16.769477] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:33.591 [2024-11-16 18:55:16.769486] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:33.591 18:55:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.591 [2024-11-16 18:55:16.841234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:33.591 [2024-11-16 18:55:16.841287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:33.591 [2024-11-16 18:55:16.841304] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:33.591 [2024-11-16 18:55:16.841312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:33.591 [2024-11-16 18:55:16.843465] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:33.591 [2024-11-16 18:55:16.843501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:33.591 [2024-11-16 18:55:16.843574] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:14:33.591 [2024-11-16 18:55:16.843624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:14:33.591 [2024-11-16 18:55:16.843778] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:14:33.591 [2024-11-16 18:55:16.843793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:33.591 [2024-11-16 18:55:16.843813] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:14:33.591 [2024-11-16 18:55:16.843879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:14:33.591 pt1 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:14:33.591 18:55:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.591 "name": "raid_bdev1", 00:14:33.591 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:33.591 "strip_size_kb": 64, 00:14:33.591 "state": "configuring", 00:14:33.591 "raid_level": "raid5f", 00:14:33.591 
"superblock": true, 00:14:33.591 "num_base_bdevs": 3, 00:14:33.591 "num_base_bdevs_discovered": 1, 00:14:33.591 "num_base_bdevs_operational": 2, 00:14:33.591 "base_bdevs_list": [ 00:14:33.591 { 00:14:33.591 "name": null, 00:14:33.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.591 "is_configured": false, 00:14:33.591 "data_offset": 2048, 00:14:33.591 "data_size": 63488 00:14:33.591 }, 00:14:33.591 { 00:14:33.591 "name": "pt2", 00:14:33.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:33.591 "is_configured": true, 00:14:33.591 "data_offset": 2048, 00:14:33.591 "data_size": 63488 00:14:33.591 }, 00:14:33.591 { 00:14:33.591 "name": null, 00:14:33.591 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:33.591 "is_configured": false, 00:14:33.591 "data_offset": 2048, 00:14:33.591 "data_size": 63488 00:14:33.591 } 00:14:33.591 ] 00:14:33.591 }' 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.591 18:55:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.851 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:14:33.851 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.851 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:33.851 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:33.851 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.111 [2024-11-16 18:55:17.336390] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:34.111 [2024-11-16 18:55:17.336493] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.111 [2024-11-16 18:55:17.336531] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:34.111 [2024-11-16 18:55:17.336560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.111 [2024-11-16 18:55:17.337026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.111 [2024-11-16 18:55:17.337083] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:34.111 [2024-11-16 18:55:17.337188] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:14:34.111 [2024-11-16 18:55:17.337240] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:14:34.111 [2024-11-16 18:55:17.337387] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:14:34.111 [2024-11-16 18:55:17.337423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:34.111 [2024-11-16 18:55:17.337694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:34.111 [2024-11-16 18:55:17.343455] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:14:34.111 [2024-11-16 18:55:17.343514] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:14:34.111 [2024-11-16 18:55:17.343828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:34.111 pt3 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:34.111 "name": "raid_bdev1", 00:14:34.111 "uuid": "2d5c49b0-92cc-468a-aa92-efc131c102b6", 00:14:34.111 "strip_size_kb": 64, 00:14:34.111 "state": "online", 00:14:34.111 "raid_level": 
"raid5f", 00:14:34.111 "superblock": true, 00:14:34.111 "num_base_bdevs": 3, 00:14:34.111 "num_base_bdevs_discovered": 2, 00:14:34.111 "num_base_bdevs_operational": 2, 00:14:34.111 "base_bdevs_list": [ 00:14:34.111 { 00:14:34.111 "name": null, 00:14:34.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.111 "is_configured": false, 00:14:34.111 "data_offset": 2048, 00:14:34.111 "data_size": 63488 00:14:34.111 }, 00:14:34.111 { 00:14:34.111 "name": "pt2", 00:14:34.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:14:34.111 "is_configured": true, 00:14:34.111 "data_offset": 2048, 00:14:34.111 "data_size": 63488 00:14:34.111 }, 00:14:34.111 { 00:14:34.111 "name": "pt3", 00:14:34.111 "uuid": "00000000-0000-0000-0000-000000000003", 00:14:34.111 "is_configured": true, 00:14:34.111 "data_offset": 2048, 00:14:34.111 "data_size": 63488 00:14:34.111 } 00:14:34.111 ] 00:14:34.111 }' 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:34.111 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.371 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.371 [2024-11-16 18:55:17.825725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 2d5c49b0-92cc-468a-aa92-efc131c102b6 '!=' 2d5c49b0-92cc-468a-aa92-efc131c102b6 ']' 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80832 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80832 ']' 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80832 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80832 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:34.630 killing process with pid 80832 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80832' 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80832 00:14:34.630 [2024-11-16 18:55:17.896340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.630 [2024-11-16 18:55:17.896425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:14:34.630 [2024-11-16 18:55:17.896486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.630 [2024-11-16 18:55:17.896498] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:14:34.630 18:55:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80832 00:14:34.897 [2024-11-16 18:55:18.174335] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.838 18:55:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:14:35.838 00:14:35.838 real 0m7.382s 00:14:35.838 user 0m11.565s 00:14:35.838 sys 0m1.252s 00:14:35.838 18:55:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:35.838 ************************************ 00:14:35.838 END TEST raid5f_superblock_test 00:14:35.838 ************************************ 00:14:35.838 18:55:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.838 18:55:19 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:14:35.838 18:55:19 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:14:35.838 18:55:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:35.838 18:55:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:35.838 18:55:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.838 ************************************ 00:14:35.838 START TEST raid5f_rebuild_test 00:14:35.838 ************************************ 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.838 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:35.839 18:55:19 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81265 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81265 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81265 ']' 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:35.839 18:55:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.098 [2024-11-16 18:55:19.369711] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:36.098 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:36.098 Zero copy mechanism will not be used. 00:14:36.098 [2024-11-16 18:55:19.369890] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81265 ] 00:14:36.098 [2024-11-16 18:55:19.542089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.357 [2024-11-16 18:55:19.644362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.617 [2024-11-16 18:55:19.837331] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.617 [2024-11-16 18:55:19.837388] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.876 BaseBdev1_malloc 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.876 
18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.876 [2024-11-16 18:55:20.220075] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.876 [2024-11-16 18:55:20.220145] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.876 [2024-11-16 18:55:20.220165] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.876 [2024-11-16 18:55:20.220175] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.876 [2024-11-16 18:55:20.222208] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.876 [2024-11-16 18:55:20.222321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.876 BaseBdev1 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.876 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.877 BaseBdev2_malloc 00:14:36.877 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.877 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:36.877 18:55:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.877 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:36.877 [2024-11-16 18:55:20.271700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:36.877 [2024-11-16 18:55:20.271755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.877 [2024-11-16 18:55:20.271771] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.877 [2024-11-16 18:55:20.271782] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.877 [2024-11-16 18:55:20.273765] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.877 [2024-11-16 18:55:20.273800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.877 BaseBdev2 00:14:36.877 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.877 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.877 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.877 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.877 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.137 BaseBdev3_malloc 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.137 [2024-11-16 18:55:20.362510] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:37.137 [2024-11-16 18:55:20.362560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.137 [2024-11-16 18:55:20.362578] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:37.137 [2024-11-16 18:55:20.362588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.137 [2024-11-16 18:55:20.364567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.137 [2024-11-16 18:55:20.364608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:37.137 BaseBdev3 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.137 spare_malloc 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.137 spare_delay 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.137 [2024-11-16 18:55:20.426973] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.137 [2024-11-16 18:55:20.427023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.137 [2024-11-16 18:55:20.427041] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:37.137 [2024-11-16 18:55:20.427051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.137 [2024-11-16 18:55:20.429148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.137 [2024-11-16 18:55:20.429190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.137 spare 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.137 [2024-11-16 18:55:20.439014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.137 [2024-11-16 18:55:20.440935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.137 [2024-11-16 18:55:20.440994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.137 [2024-11-16 18:55:20.441076] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:37.137 [2024-11-16 18:55:20.441087] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:37.137 [2024-11-16 
18:55:20.441337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:37.137 [2024-11-16 18:55:20.446717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:37.137 [2024-11-16 18:55:20.446739] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:37.137 [2024-11-16 18:55:20.446921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.137 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.137 "name": "raid_bdev1", 00:14:37.137 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:37.137 "strip_size_kb": 64, 00:14:37.137 "state": "online", 00:14:37.137 "raid_level": "raid5f", 00:14:37.137 "superblock": false, 00:14:37.137 "num_base_bdevs": 3, 00:14:37.137 "num_base_bdevs_discovered": 3, 00:14:37.137 "num_base_bdevs_operational": 3, 00:14:37.137 "base_bdevs_list": [ 00:14:37.137 { 00:14:37.137 "name": "BaseBdev1", 00:14:37.137 "uuid": "656aafd8-e308-5b08-a178-9e42174a7a28", 00:14:37.137 "is_configured": true, 00:14:37.137 "data_offset": 0, 00:14:37.137 "data_size": 65536 00:14:37.137 }, 00:14:37.137 { 00:14:37.137 "name": "BaseBdev2", 00:14:37.137 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:37.137 "is_configured": true, 00:14:37.137 "data_offset": 0, 00:14:37.138 "data_size": 65536 00:14:37.138 }, 00:14:37.138 { 00:14:37.138 "name": "BaseBdev3", 00:14:37.138 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:37.138 "is_configured": true, 00:14:37.138 "data_offset": 0, 00:14:37.138 "data_size": 65536 00:14:37.138 } 00:14:37.138 ] 00:14:37.138 }' 00:14:37.138 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.138 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.708 18:55:20 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.708 [2024-11-16 18:55:20.896993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.708 18:55:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:37.708 [2024-11-16 18:55:21.164417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:37.968 /dev/nbd0 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.968 1+0 records in 00:14:37.968 1+0 records out 00:14:37.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449286 s, 9.1 MB/s 00:14:37.968 
18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:37.968 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:14:38.229 512+0 records in 00:14:38.229 512+0 records out 00:14:38.229 67108864 bytes (67 MB, 64 MiB) copied, 0.35558 s, 189 MB/s 00:14:38.229 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:38.229 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.229 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:38.229 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:38.229 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:38.229 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:14:38.229 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:38.489 [2024-11-16 18:55:21.805771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.489 [2024-11-16 18:55:21.821505] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.489 18:55:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.489 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.490 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.490 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.490 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.490 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:38.490 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.490 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.490 "name": "raid_bdev1", 00:14:38.490 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:38.490 "strip_size_kb": 64, 00:14:38.490 "state": "online", 00:14:38.490 "raid_level": "raid5f", 00:14:38.490 "superblock": false, 00:14:38.490 "num_base_bdevs": 3, 00:14:38.490 "num_base_bdevs_discovered": 2, 00:14:38.490 "num_base_bdevs_operational": 2, 00:14:38.490 "base_bdevs_list": [ 00:14:38.490 { 00:14:38.490 "name": null, 00:14:38.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.490 "is_configured": false, 00:14:38.490 "data_offset": 0, 00:14:38.490 "data_size": 65536 00:14:38.490 }, 00:14:38.490 { 00:14:38.490 
"name": "BaseBdev2", 00:14:38.490 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:38.490 "is_configured": true, 00:14:38.490 "data_offset": 0, 00:14:38.490 "data_size": 65536 00:14:38.490 }, 00:14:38.490 { 00:14:38.490 "name": "BaseBdev3", 00:14:38.490 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:38.490 "is_configured": true, 00:14:38.490 "data_offset": 0, 00:14:38.490 "data_size": 65536 00:14:38.490 } 00:14:38.490 ] 00:14:38.490 }' 00:14:38.490 18:55:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.490 18:55:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.059 18:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:39.059 18:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.059 18:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:39.059 [2024-11-16 18:55:22.252759] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:39.059 [2024-11-16 18:55:22.267780] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:14:39.059 18:55:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.059 18:55:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:39.059 [2024-11-16 18:55:22.275124] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.001 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.001 "name": "raid_bdev1", 00:14:40.001 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:40.001 "strip_size_kb": 64, 00:14:40.001 "state": "online", 00:14:40.002 "raid_level": "raid5f", 00:14:40.002 "superblock": false, 00:14:40.002 "num_base_bdevs": 3, 00:14:40.002 "num_base_bdevs_discovered": 3, 00:14:40.002 "num_base_bdevs_operational": 3, 00:14:40.002 "process": { 00:14:40.002 "type": "rebuild", 00:14:40.002 "target": "spare", 00:14:40.002 "progress": { 00:14:40.002 "blocks": 20480, 00:14:40.002 "percent": 15 00:14:40.002 } 00:14:40.002 }, 00:14:40.002 "base_bdevs_list": [ 00:14:40.002 { 00:14:40.002 "name": "spare", 00:14:40.002 "uuid": "822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:40.002 "is_configured": true, 00:14:40.002 "data_offset": 0, 00:14:40.002 "data_size": 65536 00:14:40.002 }, 00:14:40.002 { 00:14:40.002 "name": "BaseBdev2", 00:14:40.002 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:40.002 "is_configured": true, 00:14:40.002 "data_offset": 0, 00:14:40.002 "data_size": 65536 00:14:40.002 }, 00:14:40.002 { 00:14:40.002 "name": "BaseBdev3", 00:14:40.002 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:40.002 "is_configured": true, 00:14:40.002 "data_offset": 0, 00:14:40.002 
"data_size": 65536 00:14:40.002 } 00:14:40.002 ] 00:14:40.002 }' 00:14:40.002 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.002 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.002 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.002 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.002 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:40.002 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.002 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.002 [2024-11-16 18:55:23.406243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.262 [2024-11-16 18:55:23.482313] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:40.262 [2024-11-16 18:55:23.482365] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.262 [2024-11-16 18:55:23.482384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:40.262 [2024-11-16 18:55:23.482391] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.262 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.263 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.263 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.263 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.263 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.263 "name": "raid_bdev1", 00:14:40.263 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:40.263 "strip_size_kb": 64, 00:14:40.263 "state": "online", 00:14:40.263 "raid_level": "raid5f", 00:14:40.263 "superblock": false, 00:14:40.263 "num_base_bdevs": 3, 00:14:40.263 "num_base_bdevs_discovered": 2, 00:14:40.263 "num_base_bdevs_operational": 2, 00:14:40.263 "base_bdevs_list": [ 00:14:40.263 { 00:14:40.263 "name": null, 00:14:40.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.263 "is_configured": false, 00:14:40.263 "data_offset": 0, 00:14:40.263 "data_size": 65536 00:14:40.263 }, 00:14:40.263 { 00:14:40.263 "name": "BaseBdev2", 00:14:40.263 
"uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:40.263 "is_configured": true, 00:14:40.263 "data_offset": 0, 00:14:40.263 "data_size": 65536 00:14:40.263 }, 00:14:40.263 { 00:14:40.263 "name": "BaseBdev3", 00:14:40.263 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:40.263 "is_configured": true, 00:14:40.263 "data_offset": 0, 00:14:40.263 "data_size": 65536 00:14:40.263 } 00:14:40.263 ] 00:14:40.263 }' 00:14:40.263 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.263 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.523 "name": "raid_bdev1", 00:14:40.523 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:40.523 "strip_size_kb": 64, 00:14:40.523 "state": "online", 00:14:40.523 "raid_level": 
"raid5f", 00:14:40.523 "superblock": false, 00:14:40.523 "num_base_bdevs": 3, 00:14:40.523 "num_base_bdevs_discovered": 2, 00:14:40.523 "num_base_bdevs_operational": 2, 00:14:40.523 "base_bdevs_list": [ 00:14:40.523 { 00:14:40.523 "name": null, 00:14:40.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.523 "is_configured": false, 00:14:40.523 "data_offset": 0, 00:14:40.523 "data_size": 65536 00:14:40.523 }, 00:14:40.523 { 00:14:40.523 "name": "BaseBdev2", 00:14:40.523 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:40.523 "is_configured": true, 00:14:40.523 "data_offset": 0, 00:14:40.523 "data_size": 65536 00:14:40.523 }, 00:14:40.523 { 00:14:40.523 "name": "BaseBdev3", 00:14:40.523 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:40.523 "is_configured": true, 00:14:40.523 "data_offset": 0, 00:14:40.523 "data_size": 65536 00:14:40.523 } 00:14:40.523 ] 00:14:40.523 }' 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:40.523 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.783 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:40.783 18:55:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:40.783 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.783 18:55:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:40.783 [2024-11-16 18:55:23.999281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.783 [2024-11-16 18:55:24.014487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:14:40.783 18:55:24 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.783 18:55:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:40.783 [2024-11-16 18:55:24.021631] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:41.723 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.723 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.723 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.723 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.723 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.723 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.723 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.723 18:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.724 "name": "raid_bdev1", 00:14:41.724 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:41.724 "strip_size_kb": 64, 00:14:41.724 "state": "online", 00:14:41.724 "raid_level": "raid5f", 00:14:41.724 "superblock": false, 00:14:41.724 "num_base_bdevs": 3, 00:14:41.724 "num_base_bdevs_discovered": 3, 00:14:41.724 "num_base_bdevs_operational": 3, 00:14:41.724 "process": { 00:14:41.724 "type": "rebuild", 00:14:41.724 "target": "spare", 00:14:41.724 "progress": { 00:14:41.724 "blocks": 20480, 00:14:41.724 
"percent": 15 00:14:41.724 } 00:14:41.724 }, 00:14:41.724 "base_bdevs_list": [ 00:14:41.724 { 00:14:41.724 "name": "spare", 00:14:41.724 "uuid": "822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:41.724 "is_configured": true, 00:14:41.724 "data_offset": 0, 00:14:41.724 "data_size": 65536 00:14:41.724 }, 00:14:41.724 { 00:14:41.724 "name": "BaseBdev2", 00:14:41.724 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:41.724 "is_configured": true, 00:14:41.724 "data_offset": 0, 00:14:41.724 "data_size": 65536 00:14:41.724 }, 00:14:41.724 { 00:14:41.724 "name": "BaseBdev3", 00:14:41.724 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:41.724 "is_configured": true, 00:14:41.724 "data_offset": 0, 00:14:41.724 "data_size": 65536 00:14:41.724 } 00:14:41.724 ] 00:14:41.724 }' 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=527 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:41.724 18:55:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.984 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.984 "name": "raid_bdev1", 00:14:41.984 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:41.984 "strip_size_kb": 64, 00:14:41.984 "state": "online", 00:14:41.984 "raid_level": "raid5f", 00:14:41.984 "superblock": false, 00:14:41.984 "num_base_bdevs": 3, 00:14:41.984 "num_base_bdevs_discovered": 3, 00:14:41.984 "num_base_bdevs_operational": 3, 00:14:41.984 "process": { 00:14:41.984 "type": "rebuild", 00:14:41.984 "target": "spare", 00:14:41.984 "progress": { 00:14:41.984 "blocks": 22528, 00:14:41.984 "percent": 17 00:14:41.984 } 00:14:41.984 }, 00:14:41.984 "base_bdevs_list": [ 00:14:41.984 { 00:14:41.984 "name": "spare", 00:14:41.984 "uuid": "822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:41.984 "is_configured": true, 00:14:41.984 "data_offset": 0, 00:14:41.984 "data_size": 65536 00:14:41.984 }, 00:14:41.984 { 00:14:41.984 "name": "BaseBdev2", 00:14:41.984 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:41.984 "is_configured": true, 00:14:41.984 "data_offset": 0, 00:14:41.984 
"data_size": 65536 00:14:41.984 }, 00:14:41.984 { 00:14:41.984 "name": "BaseBdev3", 00:14:41.984 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:41.984 "is_configured": true, 00:14:41.984 "data_offset": 0, 00:14:41.984 "data_size": 65536 00:14:41.984 } 00:14:41.984 ] 00:14:41.984 }' 00:14:41.984 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.984 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:41.984 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.984 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.984 18:55:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.924 "name": "raid_bdev1", 00:14:42.924 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:42.924 "strip_size_kb": 64, 00:14:42.924 "state": "online", 00:14:42.924 "raid_level": "raid5f", 00:14:42.924 "superblock": false, 00:14:42.924 "num_base_bdevs": 3, 00:14:42.924 "num_base_bdevs_discovered": 3, 00:14:42.924 "num_base_bdevs_operational": 3, 00:14:42.924 "process": { 00:14:42.924 "type": "rebuild", 00:14:42.924 "target": "spare", 00:14:42.924 "progress": { 00:14:42.924 "blocks": 45056, 00:14:42.924 "percent": 34 00:14:42.924 } 00:14:42.924 }, 00:14:42.924 "base_bdevs_list": [ 00:14:42.924 { 00:14:42.924 "name": "spare", 00:14:42.924 "uuid": "822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:42.924 "is_configured": true, 00:14:42.924 "data_offset": 0, 00:14:42.924 "data_size": 65536 00:14:42.924 }, 00:14:42.924 { 00:14:42.924 "name": "BaseBdev2", 00:14:42.924 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:42.924 "is_configured": true, 00:14:42.924 "data_offset": 0, 00:14:42.924 "data_size": 65536 00:14:42.924 }, 00:14:42.924 { 00:14:42.924 "name": "BaseBdev3", 00:14:42.924 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:42.924 "is_configured": true, 00:14:42.924 "data_offset": 0, 00:14:42.924 "data_size": 65536 00:14:42.924 } 00:14:42.924 ] 00:14:42.924 }' 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.924 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.184 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:43.184 18:55:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.124 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.124 "name": "raid_bdev1", 00:14:44.124 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:44.124 "strip_size_kb": 64, 00:14:44.124 "state": "online", 00:14:44.124 "raid_level": "raid5f", 00:14:44.124 "superblock": false, 00:14:44.124 "num_base_bdevs": 3, 00:14:44.124 "num_base_bdevs_discovered": 3, 00:14:44.124 "num_base_bdevs_operational": 3, 00:14:44.124 "process": { 00:14:44.124 "type": "rebuild", 00:14:44.124 "target": "spare", 00:14:44.124 "progress": { 00:14:44.124 "blocks": 67584, 00:14:44.124 "percent": 51 00:14:44.124 } 00:14:44.124 }, 00:14:44.124 "base_bdevs_list": [ 00:14:44.124 { 00:14:44.124 "name": "spare", 00:14:44.124 "uuid": 
"822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:44.124 "is_configured": true, 00:14:44.124 "data_offset": 0, 00:14:44.124 "data_size": 65536 00:14:44.124 }, 00:14:44.124 { 00:14:44.124 "name": "BaseBdev2", 00:14:44.124 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:44.124 "is_configured": true, 00:14:44.124 "data_offset": 0, 00:14:44.124 "data_size": 65536 00:14:44.124 }, 00:14:44.124 { 00:14:44.124 "name": "BaseBdev3", 00:14:44.124 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:44.124 "is_configured": true, 00:14:44.124 "data_offset": 0, 00:14:44.124 "data_size": 65536 00:14:44.124 } 00:14:44.124 ] 00:14:44.125 }' 00:14:44.125 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.125 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.125 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.125 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.125 18:55:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.506 18:55:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.506 "name": "raid_bdev1", 00:14:45.506 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:45.506 "strip_size_kb": 64, 00:14:45.506 "state": "online", 00:14:45.506 "raid_level": "raid5f", 00:14:45.506 "superblock": false, 00:14:45.506 "num_base_bdevs": 3, 00:14:45.506 "num_base_bdevs_discovered": 3, 00:14:45.506 "num_base_bdevs_operational": 3, 00:14:45.506 "process": { 00:14:45.506 "type": "rebuild", 00:14:45.506 "target": "spare", 00:14:45.506 "progress": { 00:14:45.506 "blocks": 92160, 00:14:45.506 "percent": 70 00:14:45.506 } 00:14:45.506 }, 00:14:45.506 "base_bdevs_list": [ 00:14:45.506 { 00:14:45.506 "name": "spare", 00:14:45.506 "uuid": "822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:45.506 "is_configured": true, 00:14:45.506 "data_offset": 0, 00:14:45.506 "data_size": 65536 00:14:45.506 }, 00:14:45.506 { 00:14:45.506 "name": "BaseBdev2", 00:14:45.506 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:45.506 "is_configured": true, 00:14:45.506 "data_offset": 0, 00:14:45.506 "data_size": 65536 00:14:45.506 }, 00:14:45.506 { 00:14:45.506 "name": "BaseBdev3", 00:14:45.506 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:45.506 "is_configured": true, 00:14:45.506 "data_offset": 0, 00:14:45.506 "data_size": 65536 00:14:45.506 } 00:14:45.506 ] 00:14:45.506 }' 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.506 18:55:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.445 "name": "raid_bdev1", 00:14:46.445 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:46.445 "strip_size_kb": 64, 00:14:46.445 "state": "online", 00:14:46.445 "raid_level": "raid5f", 00:14:46.445 "superblock": false, 00:14:46.445 "num_base_bdevs": 3, 00:14:46.445 "num_base_bdevs_discovered": 3, 00:14:46.445 
"num_base_bdevs_operational": 3, 00:14:46.445 "process": { 00:14:46.445 "type": "rebuild", 00:14:46.445 "target": "spare", 00:14:46.445 "progress": { 00:14:46.445 "blocks": 114688, 00:14:46.445 "percent": 87 00:14:46.445 } 00:14:46.445 }, 00:14:46.445 "base_bdevs_list": [ 00:14:46.445 { 00:14:46.445 "name": "spare", 00:14:46.445 "uuid": "822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:46.445 "is_configured": true, 00:14:46.445 "data_offset": 0, 00:14:46.445 "data_size": 65536 00:14:46.445 }, 00:14:46.445 { 00:14:46.445 "name": "BaseBdev2", 00:14:46.445 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:46.445 "is_configured": true, 00:14:46.445 "data_offset": 0, 00:14:46.445 "data_size": 65536 00:14:46.445 }, 00:14:46.445 { 00:14:46.445 "name": "BaseBdev3", 00:14:46.445 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:46.445 "is_configured": true, 00:14:46.445 "data_offset": 0, 00:14:46.445 "data_size": 65536 00:14:46.445 } 00:14:46.445 ] 00:14:46.445 }' 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.445 18:55:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:47.014 [2024-11-16 18:55:30.456465] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:47.014 [2024-11-16 18:55:30.456532] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:47.014 [2024-11-16 18:55:30.456572] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.583 "name": "raid_bdev1", 00:14:47.583 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:47.583 "strip_size_kb": 64, 00:14:47.583 "state": "online", 00:14:47.583 "raid_level": "raid5f", 00:14:47.583 "superblock": false, 00:14:47.583 "num_base_bdevs": 3, 00:14:47.583 "num_base_bdevs_discovered": 3, 00:14:47.583 "num_base_bdevs_operational": 3, 00:14:47.583 "base_bdevs_list": [ 00:14:47.583 { 00:14:47.583 "name": "spare", 00:14:47.583 "uuid": "822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:47.583 "is_configured": true, 00:14:47.583 "data_offset": 0, 00:14:47.583 "data_size": 65536 00:14:47.583 }, 00:14:47.583 { 00:14:47.583 "name": "BaseBdev2", 00:14:47.583 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:47.583 "is_configured": true, 00:14:47.583 
"data_offset": 0, 00:14:47.583 "data_size": 65536 00:14:47.583 }, 00:14:47.583 { 00:14:47.583 "name": "BaseBdev3", 00:14:47.583 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:47.583 "is_configured": true, 00:14:47.583 "data_offset": 0, 00:14:47.583 "data_size": 65536 00:14:47.583 } 00:14:47.583 ] 00:14:47.583 }' 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.583 18:55:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.583 18:55:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.583 "name": "raid_bdev1", 00:14:47.583 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:47.583 "strip_size_kb": 64, 00:14:47.583 "state": "online", 00:14:47.583 "raid_level": "raid5f", 00:14:47.583 "superblock": false, 00:14:47.583 "num_base_bdevs": 3, 00:14:47.583 "num_base_bdevs_discovered": 3, 00:14:47.583 "num_base_bdevs_operational": 3, 00:14:47.583 "base_bdevs_list": [ 00:14:47.583 { 00:14:47.584 "name": "spare", 00:14:47.584 "uuid": "822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:47.584 "is_configured": true, 00:14:47.584 "data_offset": 0, 00:14:47.584 "data_size": 65536 00:14:47.584 }, 00:14:47.584 { 00:14:47.584 "name": "BaseBdev2", 00:14:47.584 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:47.584 "is_configured": true, 00:14:47.584 "data_offset": 0, 00:14:47.584 "data_size": 65536 00:14:47.584 }, 00:14:47.584 { 00:14:47.584 "name": "BaseBdev3", 00:14:47.584 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:47.584 "is_configured": true, 00:14:47.584 "data_offset": 0, 00:14:47.584 "data_size": 65536 00:14:47.584 } 00:14:47.584 ] 00:14:47.584 }' 00:14:47.584 18:55:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.584 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.584 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.843 18:55:31 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.843 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:47.844 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.844 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.844 "name": "raid_bdev1", 00:14:47.844 "uuid": "440084f5-5154-46f5-bdac-11a1a7563b23", 00:14:47.844 "strip_size_kb": 64, 00:14:47.844 "state": "online", 00:14:47.844 "raid_level": "raid5f", 00:14:47.844 "superblock": false, 00:14:47.844 "num_base_bdevs": 3, 00:14:47.844 "num_base_bdevs_discovered": 3, 00:14:47.844 "num_base_bdevs_operational": 3, 00:14:47.844 "base_bdevs_list": [ 00:14:47.844 { 00:14:47.844 "name": "spare", 00:14:47.844 "uuid": "822861da-0c86-50d1-863a-27c7abbdeeda", 00:14:47.844 "is_configured": true, 00:14:47.844 "data_offset": 0, 00:14:47.844 "data_size": 65536 00:14:47.844 }, 00:14:47.844 { 00:14:47.844 
"name": "BaseBdev2", 00:14:47.844 "uuid": "2d5f5032-d57b-5f3a-b39b-e64f1352c4bc", 00:14:47.844 "is_configured": true, 00:14:47.844 "data_offset": 0, 00:14:47.844 "data_size": 65536 00:14:47.844 }, 00:14:47.844 { 00:14:47.844 "name": "BaseBdev3", 00:14:47.844 "uuid": "01fd115f-1c94-502a-94c5-11e31e8bed86", 00:14:47.844 "is_configured": true, 00:14:47.844 "data_offset": 0, 00:14:47.844 "data_size": 65536 00:14:47.844 } 00:14:47.844 ] 00:14:47.844 }' 00:14:47.844 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.844 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.104 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:48.104 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.104 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.104 [2024-11-16 18:55:31.492871] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:48.104 [2024-11-16 18:55:31.492898] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:48.104 [2024-11-16 18:55:31.492983] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.104 [2024-11-16 18:55:31.493059] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.105 [2024-11-16 18:55:31.493074] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.105 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:48.365 /dev/nbd0 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.365 1+0 records in 00:14:48.365 1+0 records out 00:14:48.365 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223301 s, 18.3 MB/s 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.365 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:48.625 /dev/nbd1 00:14:48.625 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:48.625 18:55:31 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:48.625 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:48.625 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:48.625 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:48.625 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:48.625 18:55:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.625 1+0 records in 00:14:48.625 1+0 records out 00:14:48.625 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411276 s, 10.0 MB/s 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:48.625 18:55:32 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:48.625 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:48.885 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:48.886 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:48.886 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:48.886 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:48.886 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:48.886 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:48.886 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81265 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81265 ']' 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81265 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:49.148 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81265 00:14:49.407 killing process with pid 81265 00:14:49.407 Received shutdown signal, test time was about 60.000000 seconds 00:14:49.407 00:14:49.407 Latency(us) 00:14:49.407 
[2024-11-16T18:55:32.879Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.407 [2024-11-16T18:55:32.879Z] =================================================================================================================== 00:14:49.407 [2024-11-16T18:55:32.879Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:49.407 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:49.407 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:49.407 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81265' 00:14:49.407 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81265 00:14:49.407 [2024-11-16 18:55:32.648904] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:49.407 18:55:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81265 00:14:49.666 [2024-11-16 18:55:33.020193] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.605 ************************************ 00:14:50.605 END TEST raid5f_rebuild_test 00:14:50.605 ************************************ 00:14:50.605 18:55:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:50.605 00:14:50.605 real 0m14.757s 00:14:50.605 user 0m17.992s 00:14:50.605 sys 0m1.901s 00:14:50.605 18:55:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.605 18:55:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:50.865 18:55:34 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:14:50.865 18:55:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:50.865 18:55:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.865 18:55:34 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:14:50.865 ************************************ 00:14:50.865 START TEST raid5f_rebuild_test_sb 00:14:50.865 ************************************ 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.865 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81706 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81706 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81706 ']' 00:14:50.866 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.866 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.866 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:50.866 Zero copy mechanism will not be used. 00:14:50.866 [2024-11-16 18:55:34.186038] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:14:50.866 [2024-11-16 18:55:34.186167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81706 ] 00:14:51.126 [2024-11-16 18:55:34.356069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.126 [2024-11-16 18:55:34.457145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.385 [2024-11-16 18:55:34.645860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.385 [2024-11-16 18:55:34.645914] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.646 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.646 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:51.646 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.646 
18:55:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.646 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.646 18:55:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.646 BaseBdev1_malloc 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.646 [2024-11-16 18:55:35.046935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:51.646 [2024-11-16 18:55:35.047082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.646 [2024-11-16 18:55:35.047126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:51.646 [2024-11-16 18:55:35.047161] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.646 [2024-11-16 18:55:35.049271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.646 [2024-11-16 18:55:35.049311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.646 BaseBdev1 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:51.646 18:55:35 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.646 BaseBdev2_malloc 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.646 [2024-11-16 18:55:35.096463] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:51.646 [2024-11-16 18:55:35.096537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.646 [2024-11-16 18:55:35.096555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:51.646 [2024-11-16 18:55:35.096566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.646 [2024-11-16 18:55:35.098585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.646 [2024-11-16 18:55:35.098706] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:51.646 BaseBdev2 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.646 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:51.906 BaseBdev3_malloc 00:14:51.906 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.906 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:51.906 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.906 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.906 [2024-11-16 18:55:35.182246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:51.906 [2024-11-16 18:55:35.182299] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.906 [2024-11-16 18:55:35.182317] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:51.907 [2024-11-16 18:55:35.182327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.907 [2024-11-16 18:55:35.184359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.907 [2024-11-16 18:55:35.184401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:51.907 BaseBdev3 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.907 spare_malloc 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.907 spare_delay 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.907 [2024-11-16 18:55:35.247461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:51.907 [2024-11-16 18:55:35.247511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.907 [2024-11-16 18:55:35.247527] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:14:51.907 [2024-11-16 18:55:35.247537] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.907 [2024-11-16 18:55:35.249602] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.907 [2024-11-16 18:55:35.249715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:51.907 spare 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.907 [2024-11-16 18:55:35.259505] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.907 [2024-11-16 18:55:35.261234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.907 [2024-11-16 18:55:35.261344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:51.907 [2024-11-16 18:55:35.261512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:51.907 [2024-11-16 18:55:35.261527] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:14:51.907 [2024-11-16 18:55:35.261768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:14:51.907 [2024-11-16 18:55:35.266999] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:51.907 [2024-11-16 18:55:35.267020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:51.907 [2024-11-16 18:55:35.267175] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.907 "name": "raid_bdev1", 00:14:51.907 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:14:51.907 "strip_size_kb": 64, 00:14:51.907 "state": "online", 00:14:51.907 "raid_level": "raid5f", 00:14:51.907 "superblock": true, 00:14:51.907 "num_base_bdevs": 3, 00:14:51.907 "num_base_bdevs_discovered": 3, 00:14:51.907 "num_base_bdevs_operational": 3, 00:14:51.907 "base_bdevs_list": [ 00:14:51.907 { 00:14:51.907 "name": "BaseBdev1", 00:14:51.907 "uuid": "41309a9d-b8f0-5392-886c-0a6477e3e95b", 00:14:51.907 "is_configured": true, 00:14:51.907 "data_offset": 2048, 00:14:51.907 "data_size": 63488 00:14:51.907 }, 00:14:51.907 { 00:14:51.907 "name": "BaseBdev2", 00:14:51.907 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:14:51.907 "is_configured": true, 00:14:51.907 "data_offset": 2048, 00:14:51.907 "data_size": 63488 00:14:51.907 }, 00:14:51.907 { 00:14:51.907 "name": "BaseBdev3", 00:14:51.907 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:14:51.907 "is_configured": true, 
00:14:51.907 "data_offset": 2048, 00:14:51.907 "data_size": 63488 00:14:51.907 } 00:14:51.907 ] 00:14:51.907 }' 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.907 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:52.477 [2024-11-16 18:55:35.676907] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.477 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:52.478 18:55:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.478 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:52.478 [2024-11-16 18:55:35.932360] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:52.738 /dev/nbd0 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:52.738 1+0 records in 00:14:52.738 1+0 records out 00:14:52.738 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559759 s, 7.3 MB/s 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:14:52.738 18:55:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:14:52.998 496+0 records in 00:14:52.998 496+0 records out 00:14:52.998 65011712 bytes (65 MB, 62 MiB) copied, 0.348171 s, 187 MB/s 00:14:52.998 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:52.998 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:52.998 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:52.998 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:52.998 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:52.998 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:52.998 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:53.259 [2024-11-16 18:55:36.556727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.259 [2024-11-16 18:55:36.572236] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.259 18:55:36 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:53.259 "name": "raid_bdev1", 00:14:53.259 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:14:53.259 "strip_size_kb": 64, 00:14:53.259 "state": "online", 00:14:53.259 "raid_level": "raid5f", 00:14:53.259 "superblock": true, 00:14:53.259 "num_base_bdevs": 3, 00:14:53.259 "num_base_bdevs_discovered": 2, 00:14:53.259 "num_base_bdevs_operational": 2, 00:14:53.259 "base_bdevs_list": [ 00:14:53.259 { 00:14:53.259 "name": null, 00:14:53.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.259 "is_configured": false, 00:14:53.259 "data_offset": 0, 00:14:53.259 "data_size": 63488 00:14:53.259 }, 00:14:53.259 { 00:14:53.259 "name": "BaseBdev2", 00:14:53.259 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:14:53.259 "is_configured": true, 00:14:53.259 "data_offset": 2048, 00:14:53.259 "data_size": 63488 00:14:53.259 }, 00:14:53.259 { 00:14:53.259 "name": "BaseBdev3", 00:14:53.259 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:14:53.259 "is_configured": true, 00:14:53.259 "data_offset": 2048, 00:14:53.259 "data_size": 63488 00:14:53.259 } 00:14:53.259 ] 00:14:53.259 }' 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:53.259 18:55:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.829 18:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:53.829 18:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.829 18:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.829 [2024-11-16 18:55:37.023637] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.829 [2024-11-16 18:55:37.040039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:14:53.829 18:55:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.829 18:55:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:53.829 [2024-11-16 18:55:37.047494] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.769 "name": "raid_bdev1", 00:14:54.769 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:14:54.769 "strip_size_kb": 64, 00:14:54.769 "state": "online", 00:14:54.769 "raid_level": "raid5f", 00:14:54.769 
"superblock": true, 00:14:54.769 "num_base_bdevs": 3, 00:14:54.769 "num_base_bdevs_discovered": 3, 00:14:54.769 "num_base_bdevs_operational": 3, 00:14:54.769 "process": { 00:14:54.769 "type": "rebuild", 00:14:54.769 "target": "spare", 00:14:54.769 "progress": { 00:14:54.769 "blocks": 20480, 00:14:54.769 "percent": 16 00:14:54.769 } 00:14:54.769 }, 00:14:54.769 "base_bdevs_list": [ 00:14:54.769 { 00:14:54.769 "name": "spare", 00:14:54.769 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:14:54.769 "is_configured": true, 00:14:54.769 "data_offset": 2048, 00:14:54.769 "data_size": 63488 00:14:54.769 }, 00:14:54.769 { 00:14:54.769 "name": "BaseBdev2", 00:14:54.769 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:14:54.769 "is_configured": true, 00:14:54.769 "data_offset": 2048, 00:14:54.769 "data_size": 63488 00:14:54.769 }, 00:14:54.769 { 00:14:54.769 "name": "BaseBdev3", 00:14:54.769 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:14:54.769 "is_configured": true, 00:14:54.769 "data_offset": 2048, 00:14:54.769 "data_size": 63488 00:14:54.769 } 00:14:54.769 ] 00:14:54.769 }' 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.769 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.769 [2024-11-16 18:55:38.190044] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:14:55.029 [2024-11-16 18:55:38.254875] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:55.029 [2024-11-16 18:55:38.254986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:55.029 [2024-11-16 18:55:38.255005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:55.029 [2024-11-16 18:55:38.255014] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.029 "name": "raid_bdev1", 00:14:55.029 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:14:55.029 "strip_size_kb": 64, 00:14:55.029 "state": "online", 00:14:55.029 "raid_level": "raid5f", 00:14:55.029 "superblock": true, 00:14:55.029 "num_base_bdevs": 3, 00:14:55.029 "num_base_bdevs_discovered": 2, 00:14:55.029 "num_base_bdevs_operational": 2, 00:14:55.029 "base_bdevs_list": [ 00:14:55.029 { 00:14:55.029 "name": null, 00:14:55.029 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.029 "is_configured": false, 00:14:55.029 "data_offset": 0, 00:14:55.029 "data_size": 63488 00:14:55.029 }, 00:14:55.029 { 00:14:55.029 "name": "BaseBdev2", 00:14:55.029 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:14:55.029 "is_configured": true, 00:14:55.029 "data_offset": 2048, 00:14:55.029 "data_size": 63488 00:14:55.029 }, 00:14:55.029 { 00:14:55.029 "name": "BaseBdev3", 00:14:55.029 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:14:55.029 "is_configured": true, 00:14:55.029 "data_offset": 2048, 00:14:55.029 "data_size": 63488 00:14:55.029 } 00:14:55.029 ] 00:14:55.029 }' 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.029 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.290 18:55:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.290 "name": "raid_bdev1", 00:14:55.290 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:14:55.290 "strip_size_kb": 64, 00:14:55.290 "state": "online", 00:14:55.290 "raid_level": "raid5f", 00:14:55.290 "superblock": true, 00:14:55.290 "num_base_bdevs": 3, 00:14:55.290 "num_base_bdevs_discovered": 2, 00:14:55.290 "num_base_bdevs_operational": 2, 00:14:55.290 "base_bdevs_list": [ 00:14:55.290 { 00:14:55.290 "name": null, 00:14:55.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.290 "is_configured": false, 00:14:55.290 "data_offset": 0, 00:14:55.290 "data_size": 63488 00:14:55.290 }, 00:14:55.290 { 00:14:55.290 "name": "BaseBdev2", 00:14:55.290 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:14:55.290 "is_configured": true, 00:14:55.290 "data_offset": 2048, 00:14:55.290 "data_size": 63488 00:14:55.290 }, 00:14:55.290 { 00:14:55.290 "name": "BaseBdev3", 00:14:55.290 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:14:55.290 "is_configured": true, 00:14:55.290 "data_offset": 2048, 00:14:55.290 
"data_size": 63488 00:14:55.290 } 00:14:55.290 ] 00:14:55.290 }' 00:14:55.290 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:55.551 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:55.551 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:55.551 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:55.551 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:55.551 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.551 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.551 [2024-11-16 18:55:38.827402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:55.551 [2024-11-16 18:55:38.842858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:14:55.551 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.551 18:55:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:55.551 [2024-11-16 18:55:38.849968] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.492 "name": "raid_bdev1", 00:14:56.492 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:14:56.492 "strip_size_kb": 64, 00:14:56.492 "state": "online", 00:14:56.492 "raid_level": "raid5f", 00:14:56.492 "superblock": true, 00:14:56.492 "num_base_bdevs": 3, 00:14:56.492 "num_base_bdevs_discovered": 3, 00:14:56.492 "num_base_bdevs_operational": 3, 00:14:56.492 "process": { 00:14:56.492 "type": "rebuild", 00:14:56.492 "target": "spare", 00:14:56.492 "progress": { 00:14:56.492 "blocks": 20480, 00:14:56.492 "percent": 16 00:14:56.492 } 00:14:56.492 }, 00:14:56.492 "base_bdevs_list": [ 00:14:56.492 { 00:14:56.492 "name": "spare", 00:14:56.492 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:14:56.492 "is_configured": true, 00:14:56.492 "data_offset": 2048, 00:14:56.492 "data_size": 63488 00:14:56.492 }, 00:14:56.492 { 00:14:56.492 "name": "BaseBdev2", 00:14:56.492 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:14:56.492 "is_configured": true, 00:14:56.492 "data_offset": 2048, 00:14:56.492 "data_size": 63488 00:14:56.492 }, 00:14:56.492 { 00:14:56.492 "name": "BaseBdev3", 00:14:56.492 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:14:56.492 "is_configured": true, 00:14:56.492 "data_offset": 2048, 00:14:56.492 "data_size": 63488 00:14:56.492 } 00:14:56.492 ] 00:14:56.492 }' 
00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.492 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:56.752 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=541 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.752 18:55:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.752 "name": "raid_bdev1", 00:14:56.752 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:14:56.752 "strip_size_kb": 64, 00:14:56.752 "state": "online", 00:14:56.752 "raid_level": "raid5f", 00:14:56.752 "superblock": true, 00:14:56.752 "num_base_bdevs": 3, 00:14:56.752 "num_base_bdevs_discovered": 3, 00:14:56.752 "num_base_bdevs_operational": 3, 00:14:56.752 "process": { 00:14:56.752 "type": "rebuild", 00:14:56.752 "target": "spare", 00:14:56.752 "progress": { 00:14:56.752 "blocks": 22528, 00:14:56.752 "percent": 17 00:14:56.752 } 00:14:56.752 }, 00:14:56.752 "base_bdevs_list": [ 00:14:56.752 { 00:14:56.752 "name": "spare", 00:14:56.752 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:14:56.752 "is_configured": true, 00:14:56.752 "data_offset": 2048, 00:14:56.752 "data_size": 63488 00:14:56.752 }, 00:14:56.752 { 00:14:56.752 "name": "BaseBdev2", 00:14:56.752 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:14:56.752 "is_configured": true, 00:14:56.752 "data_offset": 2048, 00:14:56.752 "data_size": 63488 00:14:56.752 }, 00:14:56.752 { 00:14:56.752 "name": "BaseBdev3", 00:14:56.752 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:14:56.752 "is_configured": true, 00:14:56.752 "data_offset": 2048, 00:14:56.752 "data_size": 63488 00:14:56.752 } 00:14:56.752 ] 00:14:56.752 }' 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.752 18:55:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:57.691 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.691 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.691 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.691 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.692 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.692 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.692 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.692 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.692 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.692 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.692 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.955 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.955 "name": "raid_bdev1", 00:14:57.955 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:14:57.955 "strip_size_kb": 64, 00:14:57.955 "state": "online", 00:14:57.955 "raid_level": "raid5f", 00:14:57.955 "superblock": true, 00:14:57.955 "num_base_bdevs": 3, 00:14:57.955 "num_base_bdevs_discovered": 3, 00:14:57.955 
"num_base_bdevs_operational": 3, 00:14:57.955 "process": { 00:14:57.955 "type": "rebuild", 00:14:57.955 "target": "spare", 00:14:57.955 "progress": { 00:14:57.955 "blocks": 45056, 00:14:57.955 "percent": 35 00:14:57.955 } 00:14:57.955 }, 00:14:57.955 "base_bdevs_list": [ 00:14:57.955 { 00:14:57.955 "name": "spare", 00:14:57.955 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:14:57.955 "is_configured": true, 00:14:57.955 "data_offset": 2048, 00:14:57.955 "data_size": 63488 00:14:57.955 }, 00:14:57.955 { 00:14:57.955 "name": "BaseBdev2", 00:14:57.955 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:14:57.955 "is_configured": true, 00:14:57.955 "data_offset": 2048, 00:14:57.955 "data_size": 63488 00:14:57.955 }, 00:14:57.955 { 00:14:57.955 "name": "BaseBdev3", 00:14:57.955 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:14:57.955 "is_configured": true, 00:14:57.955 "data_offset": 2048, 00:14:57.955 "data_size": 63488 00:14:57.955 } 00:14:57.955 ] 00:14:57.955 }' 00:14:57.955 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.955 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.955 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.955 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.955 18:55:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.894 "name": "raid_bdev1", 00:14:58.894 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:14:58.894 "strip_size_kb": 64, 00:14:58.894 "state": "online", 00:14:58.894 "raid_level": "raid5f", 00:14:58.894 "superblock": true, 00:14:58.894 "num_base_bdevs": 3, 00:14:58.894 "num_base_bdevs_discovered": 3, 00:14:58.894 "num_base_bdevs_operational": 3, 00:14:58.894 "process": { 00:14:58.894 "type": "rebuild", 00:14:58.894 "target": "spare", 00:14:58.894 "progress": { 00:14:58.894 "blocks": 67584, 00:14:58.894 "percent": 53 00:14:58.894 } 00:14:58.894 }, 00:14:58.894 "base_bdevs_list": [ 00:14:58.894 { 00:14:58.894 "name": "spare", 00:14:58.894 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:14:58.894 "is_configured": true, 00:14:58.894 "data_offset": 2048, 00:14:58.894 "data_size": 63488 00:14:58.894 }, 00:14:58.894 { 00:14:58.894 "name": "BaseBdev2", 00:14:58.894 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:14:58.894 "is_configured": true, 00:14:58.894 "data_offset": 2048, 00:14:58.894 "data_size": 63488 00:14:58.894 }, 00:14:58.894 { 00:14:58.894 "name": "BaseBdev3", 
00:14:58.894 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:14:58.894 "is_configured": true, 00:14:58.894 "data_offset": 2048, 00:14:58.894 "data_size": 63488 00:14:58.894 } 00:14:58.894 ] 00:14:58.894 }' 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.894 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.155 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:59.155 18:55:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:00.095 "name": "raid_bdev1", 00:15:00.095 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:00.095 "strip_size_kb": 64, 00:15:00.095 "state": "online", 00:15:00.095 "raid_level": "raid5f", 00:15:00.095 "superblock": true, 00:15:00.095 "num_base_bdevs": 3, 00:15:00.095 "num_base_bdevs_discovered": 3, 00:15:00.095 "num_base_bdevs_operational": 3, 00:15:00.095 "process": { 00:15:00.095 "type": "rebuild", 00:15:00.095 "target": "spare", 00:15:00.095 "progress": { 00:15:00.095 "blocks": 92160, 00:15:00.095 "percent": 72 00:15:00.095 } 00:15:00.095 }, 00:15:00.095 "base_bdevs_list": [ 00:15:00.095 { 00:15:00.095 "name": "spare", 00:15:00.095 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:15:00.095 "is_configured": true, 00:15:00.095 "data_offset": 2048, 00:15:00.095 "data_size": 63488 00:15:00.095 }, 00:15:00.095 { 00:15:00.095 "name": "BaseBdev2", 00:15:00.095 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:00.095 "is_configured": true, 00:15:00.095 "data_offset": 2048, 00:15:00.095 "data_size": 63488 00:15:00.095 }, 00:15:00.095 { 00:15:00.095 "name": "BaseBdev3", 00:15:00.095 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:00.095 "is_configured": true, 00:15:00.095 "data_offset": 2048, 00:15:00.095 "data_size": 63488 00:15:00.095 } 00:15:00.095 ] 00:15:00.095 }' 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:00.095 18:55:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.477 18:55:44 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:01.477 "name": "raid_bdev1", 00:15:01.477 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:01.477 "strip_size_kb": 64, 00:15:01.477 "state": "online", 00:15:01.477 "raid_level": "raid5f", 00:15:01.477 "superblock": true, 00:15:01.477 "num_base_bdevs": 3, 00:15:01.477 "num_base_bdevs_discovered": 3, 00:15:01.477 "num_base_bdevs_operational": 3, 00:15:01.477 "process": { 00:15:01.477 "type": "rebuild", 00:15:01.477 "target": "spare", 00:15:01.477 "progress": { 00:15:01.477 "blocks": 114688, 00:15:01.477 "percent": 90 00:15:01.477 } 00:15:01.477 }, 00:15:01.477 "base_bdevs_list": [ 00:15:01.477 { 00:15:01.477 "name": "spare", 00:15:01.477 "uuid": 
"df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:15:01.477 "is_configured": true, 00:15:01.477 "data_offset": 2048, 00:15:01.477 "data_size": 63488 00:15:01.477 }, 00:15:01.477 { 00:15:01.477 "name": "BaseBdev2", 00:15:01.477 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:01.477 "is_configured": true, 00:15:01.477 "data_offset": 2048, 00:15:01.477 "data_size": 63488 00:15:01.477 }, 00:15:01.477 { 00:15:01.477 "name": "BaseBdev3", 00:15:01.477 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:01.477 "is_configured": true, 00:15:01.477 "data_offset": 2048, 00:15:01.477 "data_size": 63488 00:15:01.477 } 00:15:01.477 ] 00:15:01.477 }' 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:01.477 18:55:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:01.736 [2024-11-16 18:55:45.084229] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:01.736 [2024-11-16 18:55:45.084378] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:01.736 [2024-11-16 18:55:45.084503] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.306 "name": "raid_bdev1", 00:15:02.306 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:02.306 "strip_size_kb": 64, 00:15:02.306 "state": "online", 00:15:02.306 "raid_level": "raid5f", 00:15:02.306 "superblock": true, 00:15:02.306 "num_base_bdevs": 3, 00:15:02.306 "num_base_bdevs_discovered": 3, 00:15:02.306 "num_base_bdevs_operational": 3, 00:15:02.306 "base_bdevs_list": [ 00:15:02.306 { 00:15:02.306 "name": "spare", 00:15:02.306 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:15:02.306 "is_configured": true, 00:15:02.306 "data_offset": 2048, 00:15:02.306 "data_size": 63488 00:15:02.306 }, 00:15:02.306 { 00:15:02.306 "name": "BaseBdev2", 00:15:02.306 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:02.306 "is_configured": true, 00:15:02.306 "data_offset": 2048, 00:15:02.306 "data_size": 63488 00:15:02.306 }, 00:15:02.306 { 00:15:02.306 "name": "BaseBdev3", 00:15:02.306 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:02.306 "is_configured": true, 00:15:02.306 "data_offset": 2048, 00:15:02.306 "data_size": 63488 00:15:02.306 } 
00:15:02.306 ] 00:15:02.306 }' 00:15:02.306 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:02.567 "name": "raid_bdev1", 00:15:02.567 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:02.567 "strip_size_kb": 64, 00:15:02.567 "state": "online", 00:15:02.567 "raid_level": 
"raid5f", 00:15:02.567 "superblock": true, 00:15:02.567 "num_base_bdevs": 3, 00:15:02.567 "num_base_bdevs_discovered": 3, 00:15:02.567 "num_base_bdevs_operational": 3, 00:15:02.567 "base_bdevs_list": [ 00:15:02.567 { 00:15:02.567 "name": "spare", 00:15:02.567 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:15:02.567 "is_configured": true, 00:15:02.567 "data_offset": 2048, 00:15:02.567 "data_size": 63488 00:15:02.567 }, 00:15:02.567 { 00:15:02.567 "name": "BaseBdev2", 00:15:02.567 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:02.567 "is_configured": true, 00:15:02.567 "data_offset": 2048, 00:15:02.567 "data_size": 63488 00:15:02.567 }, 00:15:02.567 { 00:15:02.567 "name": "BaseBdev3", 00:15:02.567 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:02.567 "is_configured": true, 00:15:02.567 "data_offset": 2048, 00:15:02.567 "data_size": 63488 00:15:02.567 } 00:15:02.567 ] 00:15:02.567 }' 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.567 18:55:45 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.567 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.567 "name": "raid_bdev1", 00:15:02.567 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:02.567 "strip_size_kb": 64, 00:15:02.567 "state": "online", 00:15:02.567 "raid_level": "raid5f", 00:15:02.567 "superblock": true, 00:15:02.567 "num_base_bdevs": 3, 00:15:02.567 "num_base_bdevs_discovered": 3, 00:15:02.567 "num_base_bdevs_operational": 3, 00:15:02.567 "base_bdevs_list": [ 00:15:02.567 { 00:15:02.567 "name": "spare", 00:15:02.567 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:15:02.567 "is_configured": true, 00:15:02.567 "data_offset": 2048, 00:15:02.567 "data_size": 63488 00:15:02.567 }, 00:15:02.567 { 00:15:02.567 "name": "BaseBdev2", 00:15:02.567 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:02.567 "is_configured": true, 00:15:02.567 "data_offset": 2048, 00:15:02.567 
"data_size": 63488 00:15:02.567 }, 00:15:02.567 { 00:15:02.567 "name": "BaseBdev3", 00:15:02.567 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:02.567 "is_configured": true, 00:15:02.567 "data_offset": 2048, 00:15:02.567 "data_size": 63488 00:15:02.567 } 00:15:02.567 ] 00:15:02.568 }' 00:15:02.568 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.568 18:55:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:03.137 [2024-11-16 18:55:46.361091] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:03.137 [2024-11-16 18:55:46.361169] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.137 [2024-11-16 18:55:46.361274] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.137 [2024-11-16 18:55:46.361355] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.137 [2024-11-16 18:55:46.361370] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:03.137 /dev/nbd0 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.137 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.397 1+0 records in 00:15:03.397 1+0 records out 00:15:03.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403503 s, 10.2 MB/s 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:03.397 /dev/nbd1 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:03.397 1+0 records in 00:15:03.397 1+0 records out 00:15:03.397 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296607 s, 13.8 MB/s 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- 
# '[' 4096 '!=' 0 ']' 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:03.397 18:55:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:03.657 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:03.657 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:03.657 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:03.657 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:03.657 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:03.657 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.657 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:03.916 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.177 [2024-11-16 18:55:47.456946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.177 [2024-11-16 18:55:47.457008] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.177 [2024-11-16 18:55:47.457029] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:04.177 [2024-11-16 18:55:47.457040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.177 [2024-11-16 18:55:47.459216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.177 spare 00:15:04.177 [2024-11-16 18:55:47.459301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.177 [2024-11-16 18:55:47.459396] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:04.177 [2024-11-16 18:55:47.459455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.177 [2024-11-16 18:55:47.459596] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.177 [2024-11-16 18:55:47.459715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.177 [2024-11-16 18:55:47.559601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:04.177 [2024-11-16 18:55:47.559626] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:04.177 [2024-11-16 18:55:47.559918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:04.177 [2024-11-16 18:55:47.565427] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:04.177 [2024-11-16 18:55:47.565446] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:04.177 [2024-11-16 18:55:47.565615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.177 18:55:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.177 "name": "raid_bdev1", 00:15:04.177 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:04.177 "strip_size_kb": 64, 00:15:04.177 "state": "online", 00:15:04.177 "raid_level": "raid5f", 00:15:04.177 "superblock": true, 00:15:04.177 "num_base_bdevs": 3, 00:15:04.177 "num_base_bdevs_discovered": 3, 00:15:04.177 "num_base_bdevs_operational": 3, 00:15:04.177 "base_bdevs_list": [ 00:15:04.177 { 00:15:04.177 "name": "spare", 00:15:04.177 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:15:04.177 "is_configured": true, 00:15:04.177 "data_offset": 2048, 00:15:04.177 "data_size": 63488 00:15:04.177 }, 00:15:04.177 { 00:15:04.177 "name": "BaseBdev2", 00:15:04.177 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:04.177 "is_configured": true, 00:15:04.177 "data_offset": 2048, 00:15:04.177 "data_size": 63488 00:15:04.177 }, 00:15:04.177 { 00:15:04.177 "name": "BaseBdev3", 00:15:04.177 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:04.177 "is_configured": true, 00:15:04.177 "data_offset": 2048, 00:15:04.177 "data_size": 63488 00:15:04.177 } 00:15:04.177 ] 00:15:04.177 }' 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.177 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:04.748 18:55:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:04.748 "name": "raid_bdev1", 00:15:04.748 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:04.748 "strip_size_kb": 64, 00:15:04.748 "state": "online", 00:15:04.748 "raid_level": "raid5f", 00:15:04.748 "superblock": true, 00:15:04.748 "num_base_bdevs": 3, 00:15:04.748 "num_base_bdevs_discovered": 3, 00:15:04.748 "num_base_bdevs_operational": 3, 00:15:04.748 "base_bdevs_list": [ 00:15:04.748 { 00:15:04.748 "name": "spare", 00:15:04.748 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:15:04.748 "is_configured": true, 00:15:04.748 "data_offset": 2048, 00:15:04.748 "data_size": 63488 00:15:04.748 }, 00:15:04.748 { 00:15:04.748 "name": "BaseBdev2", 00:15:04.748 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:04.748 "is_configured": true, 00:15:04.748 "data_offset": 2048, 00:15:04.748 "data_size": 63488 00:15:04.748 }, 00:15:04.748 { 00:15:04.748 "name": "BaseBdev3", 00:15:04.748 "uuid": 
"bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:04.748 "is_configured": true, 00:15:04.748 "data_offset": 2048, 00:15:04.748 "data_size": 63488 00:15:04.748 } 00:15:04.748 ] 00:15:04.748 }' 00:15:04.748 18:55:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.748 [2024-11-16 18:55:48.090763] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:04.748 
18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.748 "name": "raid_bdev1", 00:15:04.748 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:04.748 "strip_size_kb": 64, 00:15:04.748 "state": "online", 00:15:04.748 "raid_level": "raid5f", 00:15:04.748 "superblock": true, 00:15:04.748 "num_base_bdevs": 3, 00:15:04.748 "num_base_bdevs_discovered": 2, 00:15:04.748 "num_base_bdevs_operational": 2, 
00:15:04.748 "base_bdevs_list": [ 00:15:04.748 { 00:15:04.748 "name": null, 00:15:04.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:04.748 "is_configured": false, 00:15:04.748 "data_offset": 0, 00:15:04.748 "data_size": 63488 00:15:04.748 }, 00:15:04.748 { 00:15:04.748 "name": "BaseBdev2", 00:15:04.748 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:04.748 "is_configured": true, 00:15:04.748 "data_offset": 2048, 00:15:04.748 "data_size": 63488 00:15:04.748 }, 00:15:04.748 { 00:15:04.748 "name": "BaseBdev3", 00:15:04.748 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:04.748 "is_configured": true, 00:15:04.748 "data_offset": 2048, 00:15:04.748 "data_size": 63488 00:15:04.748 } 00:15:04.748 ] 00:15:04.748 }' 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.748 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.009 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.009 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.009 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.269 [2024-11-16 18:55:48.482145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.269 [2024-11-16 18:55:48.482387] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:05.269 [2024-11-16 18:55:48.482458] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:05.269 [2024-11-16 18:55:48.482554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.269 [2024-11-16 18:55:48.498662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:05.269 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.269 18:55:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:05.269 [2024-11-16 18:55:48.506267] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.220 "name": "raid_bdev1", 00:15:06.220 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:06.220 "strip_size_kb": 64, 00:15:06.220 "state": "online", 00:15:06.220 
"raid_level": "raid5f", 00:15:06.220 "superblock": true, 00:15:06.220 "num_base_bdevs": 3, 00:15:06.220 "num_base_bdevs_discovered": 3, 00:15:06.220 "num_base_bdevs_operational": 3, 00:15:06.220 "process": { 00:15:06.220 "type": "rebuild", 00:15:06.220 "target": "spare", 00:15:06.220 "progress": { 00:15:06.220 "blocks": 20480, 00:15:06.220 "percent": 16 00:15:06.220 } 00:15:06.220 }, 00:15:06.220 "base_bdevs_list": [ 00:15:06.220 { 00:15:06.220 "name": "spare", 00:15:06.220 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:15:06.220 "is_configured": true, 00:15:06.220 "data_offset": 2048, 00:15:06.220 "data_size": 63488 00:15:06.220 }, 00:15:06.220 { 00:15:06.220 "name": "BaseBdev2", 00:15:06.220 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:06.220 "is_configured": true, 00:15:06.220 "data_offset": 2048, 00:15:06.220 "data_size": 63488 00:15:06.220 }, 00:15:06.220 { 00:15:06.220 "name": "BaseBdev3", 00:15:06.220 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:06.220 "is_configured": true, 00:15:06.220 "data_offset": 2048, 00:15:06.220 "data_size": 63488 00:15:06.220 } 00:15:06.220 ] 00:15:06.220 }' 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.220 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.220 [2024-11-16 18:55:49.637462] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.489 [2024-11-16 18:55:49.713463] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:06.489 [2024-11-16 18:55:49.713522] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.489 [2024-11-16 18:55:49.713537] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.489 [2024-11-16 18:55:49.713545] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.489 "name": "raid_bdev1", 00:15:06.489 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:06.489 "strip_size_kb": 64, 00:15:06.489 "state": "online", 00:15:06.489 "raid_level": "raid5f", 00:15:06.489 "superblock": true, 00:15:06.489 "num_base_bdevs": 3, 00:15:06.489 "num_base_bdevs_discovered": 2, 00:15:06.489 "num_base_bdevs_operational": 2, 00:15:06.489 "base_bdevs_list": [ 00:15:06.489 { 00:15:06.489 "name": null, 00:15:06.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.489 "is_configured": false, 00:15:06.489 "data_offset": 0, 00:15:06.489 "data_size": 63488 00:15:06.489 }, 00:15:06.489 { 00:15:06.489 "name": "BaseBdev2", 00:15:06.489 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:06.489 "is_configured": true, 00:15:06.489 "data_offset": 2048, 00:15:06.489 "data_size": 63488 00:15:06.489 }, 00:15:06.489 { 00:15:06.489 "name": "BaseBdev3", 00:15:06.489 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:06.489 "is_configured": true, 00:15:06.489 "data_offset": 2048, 00:15:06.489 "data_size": 63488 00:15:06.489 } 00:15:06.489 ] 00:15:06.489 }' 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.489 18:55:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.759 18:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:06.759 18:55:50 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.759 18:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.759 [2024-11-16 18:55:50.178012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:06.759 [2024-11-16 18:55:50.178127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.759 [2024-11-16 18:55:50.178164] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:06.759 [2024-11-16 18:55:50.178197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.759 [2024-11-16 18:55:50.178702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.759 [2024-11-16 18:55:50.178763] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:06.759 [2024-11-16 18:55:50.178856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:06.759 [2024-11-16 18:55:50.178870] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:06.759 [2024-11-16 18:55:50.178880] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:06.759 [2024-11-16 18:55:50.178905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.759 [2024-11-16 18:55:50.193634] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:06.759 spare 00:15:06.759 18:55:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.759 18:55:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:06.759 [2024-11-16 18:55:50.200418] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.157 "name": "raid_bdev1", 00:15:08.157 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:08.157 "strip_size_kb": 64, 00:15:08.157 "state": 
"online", 00:15:08.157 "raid_level": "raid5f", 00:15:08.157 "superblock": true, 00:15:08.157 "num_base_bdevs": 3, 00:15:08.157 "num_base_bdevs_discovered": 3, 00:15:08.157 "num_base_bdevs_operational": 3, 00:15:08.157 "process": { 00:15:08.157 "type": "rebuild", 00:15:08.157 "target": "spare", 00:15:08.157 "progress": { 00:15:08.157 "blocks": 20480, 00:15:08.157 "percent": 16 00:15:08.157 } 00:15:08.157 }, 00:15:08.157 "base_bdevs_list": [ 00:15:08.157 { 00:15:08.157 "name": "spare", 00:15:08.157 "uuid": "df301f5f-0f7d-5319-b5f0-ba15a1829f6f", 00:15:08.157 "is_configured": true, 00:15:08.157 "data_offset": 2048, 00:15:08.157 "data_size": 63488 00:15:08.157 }, 00:15:08.157 { 00:15:08.157 "name": "BaseBdev2", 00:15:08.157 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:08.157 "is_configured": true, 00:15:08.157 "data_offset": 2048, 00:15:08.157 "data_size": 63488 00:15:08.157 }, 00:15:08.157 { 00:15:08.157 "name": "BaseBdev3", 00:15:08.157 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:08.157 "is_configured": true, 00:15:08.157 "data_offset": 2048, 00:15:08.157 "data_size": 63488 00:15:08.157 } 00:15:08.157 ] 00:15:08.157 }' 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.157 [2024-11-16 18:55:51.355578] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.157 [2024-11-16 18:55:51.407461] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:08.157 [2024-11-16 18:55:51.407511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.157 [2024-11-16 18:55:51.407545] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:08.157 [2024-11-16 18:55:51.407552] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:08.157 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.158 18:55:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.158 "name": "raid_bdev1", 00:15:08.158 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:08.158 "strip_size_kb": 64, 00:15:08.158 "state": "online", 00:15:08.158 "raid_level": "raid5f", 00:15:08.158 "superblock": true, 00:15:08.158 "num_base_bdevs": 3, 00:15:08.158 "num_base_bdevs_discovered": 2, 00:15:08.158 "num_base_bdevs_operational": 2, 00:15:08.158 "base_bdevs_list": [ 00:15:08.158 { 00:15:08.158 "name": null, 00:15:08.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.158 "is_configured": false, 00:15:08.158 "data_offset": 0, 00:15:08.158 "data_size": 63488 00:15:08.158 }, 00:15:08.158 { 00:15:08.158 "name": "BaseBdev2", 00:15:08.158 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:08.158 "is_configured": true, 00:15:08.158 "data_offset": 2048, 00:15:08.158 "data_size": 63488 00:15:08.158 }, 00:15:08.158 { 00:15:08.158 "name": "BaseBdev3", 00:15:08.158 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:08.158 "is_configured": true, 00:15:08.158 "data_offset": 2048, 00:15:08.158 "data_size": 63488 00:15:08.158 } 00:15:08.158 ] 00:15:08.158 }' 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.158 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.417 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:08.417 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.417 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:08.417 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:08.417 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.417 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.417 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.417 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.417 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.676 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.676 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.676 "name": "raid_bdev1", 00:15:08.676 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:08.676 "strip_size_kb": 64, 00:15:08.676 "state": "online", 00:15:08.676 "raid_level": "raid5f", 00:15:08.676 "superblock": true, 00:15:08.676 "num_base_bdevs": 3, 00:15:08.676 "num_base_bdevs_discovered": 2, 00:15:08.676 "num_base_bdevs_operational": 2, 00:15:08.676 "base_bdevs_list": [ 00:15:08.676 { 00:15:08.676 "name": null, 00:15:08.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.676 "is_configured": false, 00:15:08.676 "data_offset": 0, 00:15:08.676 "data_size": 63488 00:15:08.676 }, 00:15:08.676 { 00:15:08.676 "name": "BaseBdev2", 00:15:08.676 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:08.676 "is_configured": true, 00:15:08.676 "data_offset": 2048, 00:15:08.676 "data_size": 63488 00:15:08.676 }, 00:15:08.676 { 00:15:08.676 "name": "BaseBdev3", 00:15:08.676 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:08.676 
"is_configured": true, 00:15:08.676 "data_offset": 2048, 00:15:08.676 "data_size": 63488 00:15:08.676 } 00:15:08.676 ] 00:15:08.676 }' 00:15:08.676 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.676 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:08.676 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.676 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:08.676 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:08.676 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.676 18:55:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.676 18:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.676 18:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:08.676 18:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.676 18:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.676 [2024-11-16 18:55:52.015526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:08.676 [2024-11-16 18:55:52.015582] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:08.676 [2024-11-16 18:55:52.015622] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:08.676 [2024-11-16 18:55:52.015631] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:08.676 [2024-11-16 18:55:52.016122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:08.676 
[2024-11-16 18:55:52.016155] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:08.676 [2024-11-16 18:55:52.016234] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:08.676 [2024-11-16 18:55:52.016247] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:08.676 [2024-11-16 18:55:52.016272] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:08.676 [2024-11-16 18:55:52.016283] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:08.676 BaseBdev1 00:15:08.676 18:55:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.676 18:55:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.615 18:55:53 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.615 "name": "raid_bdev1", 00:15:09.615 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:09.615 "strip_size_kb": 64, 00:15:09.615 "state": "online", 00:15:09.615 "raid_level": "raid5f", 00:15:09.615 "superblock": true, 00:15:09.615 "num_base_bdevs": 3, 00:15:09.615 "num_base_bdevs_discovered": 2, 00:15:09.615 "num_base_bdevs_operational": 2, 00:15:09.615 "base_bdevs_list": [ 00:15:09.615 { 00:15:09.615 "name": null, 00:15:09.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.615 "is_configured": false, 00:15:09.615 "data_offset": 0, 00:15:09.615 "data_size": 63488 00:15:09.615 }, 00:15:09.615 { 00:15:09.615 "name": "BaseBdev2", 00:15:09.615 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:09.615 "is_configured": true, 00:15:09.615 "data_offset": 2048, 00:15:09.615 "data_size": 63488 00:15:09.615 }, 00:15:09.615 { 00:15:09.615 "name": "BaseBdev3", 00:15:09.615 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:09.615 "is_configured": true, 00:15:09.615 "data_offset": 2048, 00:15:09.615 "data_size": 63488 00:15:09.615 } 00:15:09.615 ] 00:15:09.615 }' 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.615 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.187 "name": "raid_bdev1", 00:15:10.187 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:10.187 "strip_size_kb": 64, 00:15:10.187 "state": "online", 00:15:10.187 "raid_level": "raid5f", 00:15:10.187 "superblock": true, 00:15:10.187 "num_base_bdevs": 3, 00:15:10.187 "num_base_bdevs_discovered": 2, 00:15:10.187 "num_base_bdevs_operational": 2, 00:15:10.187 "base_bdevs_list": [ 00:15:10.187 { 00:15:10.187 "name": null, 00:15:10.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.187 "is_configured": false, 00:15:10.187 "data_offset": 0, 00:15:10.187 "data_size": 63488 00:15:10.187 }, 00:15:10.187 { 00:15:10.187 "name": "BaseBdev2", 00:15:10.187 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 
00:15:10.187 "is_configured": true, 00:15:10.187 "data_offset": 2048, 00:15:10.187 "data_size": 63488 00:15:10.187 }, 00:15:10.187 { 00:15:10.187 "name": "BaseBdev3", 00:15:10.187 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:10.187 "is_configured": true, 00:15:10.187 "data_offset": 2048, 00:15:10.187 "data_size": 63488 00:15:10.187 } 00:15:10.187 ] 00:15:10.187 }' 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.187 18:55:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.187 [2024-11-16 18:55:53.508972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:10.187 [2024-11-16 18:55:53.509125] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:10.187 [2024-11-16 18:55:53.509140] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:10.187 request: 00:15:10.187 { 00:15:10.187 "base_bdev": "BaseBdev1", 00:15:10.187 "raid_bdev": "raid_bdev1", 00:15:10.187 "method": "bdev_raid_add_base_bdev", 00:15:10.187 "req_id": 1 00:15:10.187 } 00:15:10.187 Got JSON-RPC error response 00:15:10.187 response: 00:15:10.187 { 00:15:10.187 "code": -22, 00:15:10.187 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:10.187 } 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:10.187 18:55:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.126 "name": "raid_bdev1", 00:15:11.126 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:11.126 "strip_size_kb": 64, 00:15:11.126 "state": "online", 00:15:11.126 "raid_level": "raid5f", 00:15:11.126 "superblock": true, 00:15:11.126 "num_base_bdevs": 3, 00:15:11.126 "num_base_bdevs_discovered": 2, 00:15:11.126 "num_base_bdevs_operational": 2, 00:15:11.126 "base_bdevs_list": [ 00:15:11.126 { 00:15:11.126 "name": null, 00:15:11.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.126 "is_configured": false, 00:15:11.126 "data_offset": 0, 00:15:11.126 "data_size": 63488 00:15:11.126 }, 00:15:11.126 { 00:15:11.126 
"name": "BaseBdev2", 00:15:11.126 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:11.126 "is_configured": true, 00:15:11.126 "data_offset": 2048, 00:15:11.126 "data_size": 63488 00:15:11.126 }, 00:15:11.126 { 00:15:11.126 "name": "BaseBdev3", 00:15:11.126 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:11.126 "is_configured": true, 00:15:11.126 "data_offset": 2048, 00:15:11.126 "data_size": 63488 00:15:11.126 } 00:15:11.126 ] 00:15:11.126 }' 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.126 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.696 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.696 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.696 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.696 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.696 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.696 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.696 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.696 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.696 18:55:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.696 "name": "raid_bdev1", 00:15:11.696 "uuid": "290151a7-21ea-401c-bd55-ff8e522783b3", 00:15:11.696 
"strip_size_kb": 64, 00:15:11.696 "state": "online", 00:15:11.696 "raid_level": "raid5f", 00:15:11.696 "superblock": true, 00:15:11.696 "num_base_bdevs": 3, 00:15:11.696 "num_base_bdevs_discovered": 2, 00:15:11.696 "num_base_bdevs_operational": 2, 00:15:11.696 "base_bdevs_list": [ 00:15:11.696 { 00:15:11.696 "name": null, 00:15:11.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.696 "is_configured": false, 00:15:11.696 "data_offset": 0, 00:15:11.696 "data_size": 63488 00:15:11.696 }, 00:15:11.696 { 00:15:11.696 "name": "BaseBdev2", 00:15:11.696 "uuid": "66b104c1-869e-5886-9e49-8ce4513f421f", 00:15:11.696 "is_configured": true, 00:15:11.696 "data_offset": 2048, 00:15:11.696 "data_size": 63488 00:15:11.696 }, 00:15:11.696 { 00:15:11.696 "name": "BaseBdev3", 00:15:11.696 "uuid": "bfa947e5-810d-5264-baeb-61a70c5e5fe5", 00:15:11.696 "is_configured": true, 00:15:11.696 "data_offset": 2048, 00:15:11.696 "data_size": 63488 00:15:11.696 } 00:15:11.696 ] 00:15:11.696 }' 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81706 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81706 ']' 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81706 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:11.696 18:55:55 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81706 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:11.696 killing process with pid 81706 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81706' 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81706 00:15:11.696 Received shutdown signal, test time was about 60.000000 seconds 00:15:11.696 00:15:11.696 Latency(us) 00:15:11.696 [2024-11-16T18:55:55.168Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.696 [2024-11-16T18:55:55.168Z] =================================================================================================================== 00:15:11.696 [2024-11-16T18:55:55.168Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:11.696 [2024-11-16 18:55:55.159029] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:11.696 [2024-11-16 18:55:55.159156] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.696 18:55:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81706 00:15:11.696 [2024-11-16 18:55:55.159223] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.696 [2024-11-16 18:55:55.159235] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:12.266 [2024-11-16 18:55:55.527734] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:13.205 18:55:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:13.205 ************************************ 00:15:13.205 END TEST 
raid5f_rebuild_test_sb 00:15:13.205 ************************************ 00:15:13.205 00:15:13.205 real 0m22.462s 00:15:13.205 user 0m28.506s 00:15:13.205 sys 0m2.621s 00:15:13.206 18:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:13.206 18:55:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.206 18:55:56 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:13.206 18:55:56 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:15:13.206 18:55:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:13.206 18:55:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:13.206 18:55:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:13.206 ************************************ 00:15:13.206 START TEST raid5f_state_function_test 00:15:13.206 ************************************ 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82444 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82444' 00:15:13.206 Process raid pid: 82444 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82444 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82444 ']' 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:13.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:13.206 18:55:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.466 [2024-11-16 18:55:56.723117] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:13.466 [2024-11-16 18:55:56.723313] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:13.466 [2024-11-16 18:55:56.892131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.726 [2024-11-16 18:55:56.998413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.726 [2024-11-16 18:55:57.192845] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:13.726 [2024-11-16 18:55:57.192955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.296 [2024-11-16 18:55:57.546507] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.296 [2024-11-16 18:55:57.546558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.296 [2024-11-16 18:55:57.546569] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.296 [2024-11-16 18:55:57.546578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.296 [2024-11-16 18:55:57.546584] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:15:14.296 [2024-11-16 18:55:57.546592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.296 [2024-11-16 18:55:57.546598] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:14.296 [2024-11-16 18:55:57.546606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.296 18:55:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.296 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.296 "name": "Existed_Raid", 00:15:14.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.296 "strip_size_kb": 64, 00:15:14.296 "state": "configuring", 00:15:14.297 "raid_level": "raid5f", 00:15:14.297 "superblock": false, 00:15:14.297 "num_base_bdevs": 4, 00:15:14.297 "num_base_bdevs_discovered": 0, 00:15:14.297 "num_base_bdevs_operational": 4, 00:15:14.297 "base_bdevs_list": [ 00:15:14.297 { 00:15:14.297 "name": "BaseBdev1", 00:15:14.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.297 "is_configured": false, 00:15:14.297 "data_offset": 0, 00:15:14.297 "data_size": 0 00:15:14.297 }, 00:15:14.297 { 00:15:14.297 "name": "BaseBdev2", 00:15:14.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.297 "is_configured": false, 00:15:14.297 "data_offset": 0, 00:15:14.297 "data_size": 0 00:15:14.297 }, 00:15:14.297 { 00:15:14.297 "name": "BaseBdev3", 00:15:14.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.297 "is_configured": false, 00:15:14.297 "data_offset": 0, 00:15:14.297 "data_size": 0 00:15:14.297 }, 00:15:14.297 { 00:15:14.297 "name": "BaseBdev4", 00:15:14.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.297 "is_configured": false, 00:15:14.297 "data_offset": 0, 00:15:14.297 "data_size": 0 00:15:14.297 } 00:15:14.297 ] 00:15:14.297 }' 00:15:14.297 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.297 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.557 [2024-11-16 18:55:57.933777] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.557 [2024-11-16 18:55:57.933853] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.557 [2024-11-16 18:55:57.941784] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:14.557 [2024-11-16 18:55:57.941860] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:14.557 [2024-11-16 18:55:57.941887] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:14.557 [2024-11-16 18:55:57.941908] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:14.557 [2024-11-16 18:55:57.941926] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:14.557 [2024-11-16 18:55:57.941946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:14.557 [2024-11-16 18:55:57.941963] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:15:14.557 [2024-11-16 18:55:57.941984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.557 [2024-11-16 18:55:57.985154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.557 BaseBdev1 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:14.557 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:14.558 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:14.558 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:14.558 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.558 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.558 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.558 
18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:14.558 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.558 18:55:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.558 [ 00:15:14.558 { 00:15:14.558 "name": "BaseBdev1", 00:15:14.558 "aliases": [ 00:15:14.558 "edd5e35e-1ea9-4c06-9933-e07f2a584128" 00:15:14.558 ], 00:15:14.558 "product_name": "Malloc disk", 00:15:14.558 "block_size": 512, 00:15:14.558 "num_blocks": 65536, 00:15:14.558 "uuid": "edd5e35e-1ea9-4c06-9933-e07f2a584128", 00:15:14.558 "assigned_rate_limits": { 00:15:14.558 "rw_ios_per_sec": 0, 00:15:14.558 "rw_mbytes_per_sec": 0, 00:15:14.558 "r_mbytes_per_sec": 0, 00:15:14.558 "w_mbytes_per_sec": 0 00:15:14.558 }, 00:15:14.558 "claimed": true, 00:15:14.558 "claim_type": "exclusive_write", 00:15:14.558 "zoned": false, 00:15:14.558 "supported_io_types": { 00:15:14.558 "read": true, 00:15:14.558 "write": true, 00:15:14.558 "unmap": true, 00:15:14.558 "flush": true, 00:15:14.558 "reset": true, 00:15:14.558 "nvme_admin": false, 00:15:14.558 "nvme_io": false, 00:15:14.558 "nvme_io_md": false, 00:15:14.558 "write_zeroes": true, 00:15:14.558 "zcopy": true, 00:15:14.558 "get_zone_info": false, 00:15:14.558 "zone_management": false, 00:15:14.558 "zone_append": false, 00:15:14.558 "compare": false, 00:15:14.558 "compare_and_write": false, 00:15:14.558 "abort": true, 00:15:14.558 "seek_hole": false, 00:15:14.558 "seek_data": false, 00:15:14.558 "copy": true, 00:15:14.558 "nvme_iov_md": false 00:15:14.558 }, 00:15:14.558 "memory_domains": [ 00:15:14.558 { 00:15:14.558 "dma_device_id": "system", 00:15:14.558 "dma_device_type": 1 00:15:14.558 }, 00:15:14.558 { 00:15:14.558 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.558 "dma_device_type": 2 00:15:14.558 } 00:15:14.558 ], 00:15:14.558 "driver_specific": {} 00:15:14.558 } 
00:15:14.558 ] 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:14.558 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:14.818 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:14.818 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:14.818 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.818 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.818 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:14.818 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:14.818 "name": "Existed_Raid", 00:15:14.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.818 "strip_size_kb": 64, 00:15:14.818 "state": "configuring", 00:15:14.818 "raid_level": "raid5f", 00:15:14.818 "superblock": false, 00:15:14.818 "num_base_bdevs": 4, 00:15:14.818 "num_base_bdevs_discovered": 1, 00:15:14.818 "num_base_bdevs_operational": 4, 00:15:14.818 "base_bdevs_list": [ 00:15:14.818 { 00:15:14.818 "name": "BaseBdev1", 00:15:14.818 "uuid": "edd5e35e-1ea9-4c06-9933-e07f2a584128", 00:15:14.818 "is_configured": true, 00:15:14.818 "data_offset": 0, 00:15:14.818 "data_size": 65536 00:15:14.818 }, 00:15:14.818 { 00:15:14.818 "name": "BaseBdev2", 00:15:14.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.818 "is_configured": false, 00:15:14.818 "data_offset": 0, 00:15:14.818 "data_size": 0 00:15:14.818 }, 00:15:14.818 { 00:15:14.818 "name": "BaseBdev3", 00:15:14.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.818 "is_configured": false, 00:15:14.818 "data_offset": 0, 00:15:14.818 "data_size": 0 00:15:14.818 }, 00:15:14.818 { 00:15:14.818 "name": "BaseBdev4", 00:15:14.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:14.818 "is_configured": false, 00:15:14.818 "data_offset": 0, 00:15:14.818 "data_size": 0 00:15:14.818 } 00:15:14.818 ] 00:15:14.818 }' 00:15:14.818 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:14.818 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.078 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:15.078 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.078 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.078 
[2024-11-16 18:55:58.448439] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:15.078 [2024-11-16 18:55:58.448521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:15.078 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.078 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:15.078 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.078 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.078 [2024-11-16 18:55:58.460479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:15.078 [2024-11-16 18:55:58.462265] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:15.078 [2024-11-16 18:55:58.462306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:15.078 [2024-11-16 18:55:58.462316] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:15.079 [2024-11-16 18:55:58.462327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:15.079 [2024-11-16 18:55:58.462333] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:15.079 [2024-11-16 18:55:58.462341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.079 "name": "Existed_Raid", 00:15:15.079 "uuid": "00000000-0000-0000-0000-000000000000", 
00:15:15.079 "strip_size_kb": 64, 00:15:15.079 "state": "configuring", 00:15:15.079 "raid_level": "raid5f", 00:15:15.079 "superblock": false, 00:15:15.079 "num_base_bdevs": 4, 00:15:15.079 "num_base_bdevs_discovered": 1, 00:15:15.079 "num_base_bdevs_operational": 4, 00:15:15.079 "base_bdevs_list": [ 00:15:15.079 { 00:15:15.079 "name": "BaseBdev1", 00:15:15.079 "uuid": "edd5e35e-1ea9-4c06-9933-e07f2a584128", 00:15:15.079 "is_configured": true, 00:15:15.079 "data_offset": 0, 00:15:15.079 "data_size": 65536 00:15:15.079 }, 00:15:15.079 { 00:15:15.079 "name": "BaseBdev2", 00:15:15.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.079 "is_configured": false, 00:15:15.079 "data_offset": 0, 00:15:15.079 "data_size": 0 00:15:15.079 }, 00:15:15.079 { 00:15:15.079 "name": "BaseBdev3", 00:15:15.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.079 "is_configured": false, 00:15:15.079 "data_offset": 0, 00:15:15.079 "data_size": 0 00:15:15.079 }, 00:15:15.079 { 00:15:15.079 "name": "BaseBdev4", 00:15:15.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.079 "is_configured": false, 00:15:15.079 "data_offset": 0, 00:15:15.079 "data_size": 0 00:15:15.079 } 00:15:15.079 ] 00:15:15.079 }' 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.079 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.665 [2024-11-16 18:55:58.949199] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:15.665 BaseBdev2 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.665 [ 00:15:15.665 { 00:15:15.665 "name": "BaseBdev2", 00:15:15.665 "aliases": [ 00:15:15.665 "cfff2cd6-c50d-4ba9-98c6-b07f3e2279a1" 00:15:15.665 ], 00:15:15.665 "product_name": "Malloc disk", 00:15:15.665 "block_size": 512, 00:15:15.665 "num_blocks": 65536, 00:15:15.665 "uuid": "cfff2cd6-c50d-4ba9-98c6-b07f3e2279a1", 00:15:15.665 "assigned_rate_limits": { 00:15:15.665 "rw_ios_per_sec": 0, 00:15:15.665 "rw_mbytes_per_sec": 0, 00:15:15.665 
"r_mbytes_per_sec": 0, 00:15:15.665 "w_mbytes_per_sec": 0 00:15:15.665 }, 00:15:15.665 "claimed": true, 00:15:15.665 "claim_type": "exclusive_write", 00:15:15.665 "zoned": false, 00:15:15.665 "supported_io_types": { 00:15:15.665 "read": true, 00:15:15.665 "write": true, 00:15:15.665 "unmap": true, 00:15:15.665 "flush": true, 00:15:15.665 "reset": true, 00:15:15.665 "nvme_admin": false, 00:15:15.665 "nvme_io": false, 00:15:15.665 "nvme_io_md": false, 00:15:15.665 "write_zeroes": true, 00:15:15.665 "zcopy": true, 00:15:15.665 "get_zone_info": false, 00:15:15.665 "zone_management": false, 00:15:15.665 "zone_append": false, 00:15:15.665 "compare": false, 00:15:15.665 "compare_and_write": false, 00:15:15.665 "abort": true, 00:15:15.665 "seek_hole": false, 00:15:15.665 "seek_data": false, 00:15:15.665 "copy": true, 00:15:15.665 "nvme_iov_md": false 00:15:15.665 }, 00:15:15.665 "memory_domains": [ 00:15:15.665 { 00:15:15.665 "dma_device_id": "system", 00:15:15.665 "dma_device_type": 1 00:15:15.665 }, 00:15:15.665 { 00:15:15.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:15.665 "dma_device_type": 2 00:15:15.665 } 00:15:15.665 ], 00:15:15.665 "driver_specific": {} 00:15:15.665 } 00:15:15.665 ] 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.665 18:55:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.665 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.665 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.665 "name": "Existed_Raid", 00:15:15.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.665 "strip_size_kb": 64, 00:15:15.665 "state": "configuring", 00:15:15.665 "raid_level": "raid5f", 00:15:15.665 "superblock": false, 00:15:15.665 "num_base_bdevs": 4, 00:15:15.665 "num_base_bdevs_discovered": 2, 00:15:15.665 "num_base_bdevs_operational": 4, 00:15:15.665 "base_bdevs_list": [ 00:15:15.665 { 00:15:15.665 "name": "BaseBdev1", 00:15:15.665 "uuid": 
"edd5e35e-1ea9-4c06-9933-e07f2a584128", 00:15:15.666 "is_configured": true, 00:15:15.666 "data_offset": 0, 00:15:15.666 "data_size": 65536 00:15:15.666 }, 00:15:15.666 { 00:15:15.666 "name": "BaseBdev2", 00:15:15.666 "uuid": "cfff2cd6-c50d-4ba9-98c6-b07f3e2279a1", 00:15:15.666 "is_configured": true, 00:15:15.666 "data_offset": 0, 00:15:15.666 "data_size": 65536 00:15:15.666 }, 00:15:15.666 { 00:15:15.666 "name": "BaseBdev3", 00:15:15.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.666 "is_configured": false, 00:15:15.666 "data_offset": 0, 00:15:15.666 "data_size": 0 00:15:15.666 }, 00:15:15.666 { 00:15:15.666 "name": "BaseBdev4", 00:15:15.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:15.666 "is_configured": false, 00:15:15.666 "data_offset": 0, 00:15:15.666 "data_size": 0 00:15:15.666 } 00:15:15.666 ] 00:15:15.666 }' 00:15:15.666 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.666 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.939 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:15.939 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.939 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.200 BaseBdev3 00:15:16.200 [2024-11-16 18:55:59.448168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.200 [ 00:15:16.200 { 00:15:16.200 "name": "BaseBdev3", 00:15:16.200 "aliases": [ 00:15:16.200 "6ff7a4a5-d8eb-4a4f-84c5-e68ec023183c" 00:15:16.200 ], 00:15:16.200 "product_name": "Malloc disk", 00:15:16.200 "block_size": 512, 00:15:16.200 "num_blocks": 65536, 00:15:16.200 "uuid": "6ff7a4a5-d8eb-4a4f-84c5-e68ec023183c", 00:15:16.200 "assigned_rate_limits": { 00:15:16.200 "rw_ios_per_sec": 0, 00:15:16.200 "rw_mbytes_per_sec": 0, 00:15:16.200 "r_mbytes_per_sec": 0, 00:15:16.200 "w_mbytes_per_sec": 0 00:15:16.200 }, 00:15:16.200 "claimed": true, 00:15:16.200 "claim_type": "exclusive_write", 00:15:16.200 "zoned": false, 00:15:16.200 "supported_io_types": { 00:15:16.200 "read": true, 00:15:16.200 "write": true, 00:15:16.200 "unmap": true, 00:15:16.200 "flush": true, 00:15:16.200 "reset": true, 00:15:16.200 "nvme_admin": false, 
00:15:16.200 "nvme_io": false, 00:15:16.200 "nvme_io_md": false, 00:15:16.200 "write_zeroes": true, 00:15:16.200 "zcopy": true, 00:15:16.200 "get_zone_info": false, 00:15:16.200 "zone_management": false, 00:15:16.200 "zone_append": false, 00:15:16.200 "compare": false, 00:15:16.200 "compare_and_write": false, 00:15:16.200 "abort": true, 00:15:16.200 "seek_hole": false, 00:15:16.200 "seek_data": false, 00:15:16.200 "copy": true, 00:15:16.200 "nvme_iov_md": false 00:15:16.200 }, 00:15:16.200 "memory_domains": [ 00:15:16.200 { 00:15:16.200 "dma_device_id": "system", 00:15:16.200 "dma_device_type": 1 00:15:16.200 }, 00:15:16.200 { 00:15:16.200 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.200 "dma_device_type": 2 00:15:16.200 } 00:15:16.200 ], 00:15:16.200 "driver_specific": {} 00:15:16.200 } 00:15:16.200 ] 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.200 "name": "Existed_Raid", 00:15:16.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.200 "strip_size_kb": 64, 00:15:16.200 "state": "configuring", 00:15:16.200 "raid_level": "raid5f", 00:15:16.200 "superblock": false, 00:15:16.200 "num_base_bdevs": 4, 00:15:16.200 "num_base_bdevs_discovered": 3, 00:15:16.200 "num_base_bdevs_operational": 4, 00:15:16.200 "base_bdevs_list": [ 00:15:16.200 { 00:15:16.200 "name": "BaseBdev1", 00:15:16.200 "uuid": "edd5e35e-1ea9-4c06-9933-e07f2a584128", 00:15:16.200 "is_configured": true, 00:15:16.200 "data_offset": 0, 00:15:16.200 "data_size": 65536 00:15:16.200 }, 00:15:16.200 { 00:15:16.200 "name": "BaseBdev2", 00:15:16.200 "uuid": "cfff2cd6-c50d-4ba9-98c6-b07f3e2279a1", 00:15:16.200 "is_configured": true, 00:15:16.200 "data_offset": 0, 00:15:16.200 "data_size": 65536 00:15:16.200 }, 00:15:16.200 { 
00:15:16.200 "name": "BaseBdev3", 00:15:16.200 "uuid": "6ff7a4a5-d8eb-4a4f-84c5-e68ec023183c", 00:15:16.200 "is_configured": true, 00:15:16.200 "data_offset": 0, 00:15:16.200 "data_size": 65536 00:15:16.200 }, 00:15:16.200 { 00:15:16.200 "name": "BaseBdev4", 00:15:16.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.200 "is_configured": false, 00:15:16.200 "data_offset": 0, 00:15:16.200 "data_size": 0 00:15:16.200 } 00:15:16.200 ] 00:15:16.200 }' 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.200 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.461 [2024-11-16 18:55:59.920885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:16.461 [2024-11-16 18:55:59.920950] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:16.461 [2024-11-16 18:55:59.920959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:16.461 [2024-11-16 18:55:59.921196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:16.461 [2024-11-16 18:55:59.927582] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:16.461 [2024-11-16 18:55:59.927604] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:16.461 [2024-11-16 18:55:59.927889] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.461 BaseBdev4 00:15:16.461 18:55:59 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.461 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.722 [ 00:15:16.722 { 00:15:16.722 "name": "BaseBdev4", 00:15:16.722 "aliases": [ 00:15:16.722 "4969722d-8c01-4edd-a73f-28da8936e06d" 00:15:16.722 ], 00:15:16.722 "product_name": "Malloc disk", 00:15:16.722 "block_size": 512, 00:15:16.722 "num_blocks": 65536, 00:15:16.722 "uuid": "4969722d-8c01-4edd-a73f-28da8936e06d", 00:15:16.722 "assigned_rate_limits": { 00:15:16.722 "rw_ios_per_sec": 0, 00:15:16.722 
"rw_mbytes_per_sec": 0, 00:15:16.722 "r_mbytes_per_sec": 0, 00:15:16.722 "w_mbytes_per_sec": 0 00:15:16.722 }, 00:15:16.722 "claimed": true, 00:15:16.722 "claim_type": "exclusive_write", 00:15:16.722 "zoned": false, 00:15:16.722 "supported_io_types": { 00:15:16.722 "read": true, 00:15:16.722 "write": true, 00:15:16.722 "unmap": true, 00:15:16.722 "flush": true, 00:15:16.722 "reset": true, 00:15:16.722 "nvme_admin": false, 00:15:16.722 "nvme_io": false, 00:15:16.722 "nvme_io_md": false, 00:15:16.722 "write_zeroes": true, 00:15:16.722 "zcopy": true, 00:15:16.722 "get_zone_info": false, 00:15:16.722 "zone_management": false, 00:15:16.722 "zone_append": false, 00:15:16.722 "compare": false, 00:15:16.722 "compare_and_write": false, 00:15:16.722 "abort": true, 00:15:16.722 "seek_hole": false, 00:15:16.722 "seek_data": false, 00:15:16.722 "copy": true, 00:15:16.722 "nvme_iov_md": false 00:15:16.722 }, 00:15:16.722 "memory_domains": [ 00:15:16.722 { 00:15:16.722 "dma_device_id": "system", 00:15:16.722 "dma_device_type": 1 00:15:16.722 }, 00:15:16.722 { 00:15:16.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:16.722 "dma_device_type": 2 00:15:16.722 } 00:15:16.722 ], 00:15:16.722 "driver_specific": {} 00:15:16.722 } 00:15:16.722 ] 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:16.722 18:55:59 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:16.722 18:55:59 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.722 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.722 "name": "Existed_Raid", 00:15:16.722 "uuid": "b34450f8-d33e-49c1-b563-14e5bcededc3", 00:15:16.722 "strip_size_kb": 64, 00:15:16.722 "state": "online", 00:15:16.722 "raid_level": "raid5f", 00:15:16.722 "superblock": false, 00:15:16.722 "num_base_bdevs": 4, 00:15:16.722 "num_base_bdevs_discovered": 4, 00:15:16.722 "num_base_bdevs_operational": 4, 00:15:16.722 "base_bdevs_list": [ 00:15:16.722 { 00:15:16.722 "name": 
"BaseBdev1", 00:15:16.722 "uuid": "edd5e35e-1ea9-4c06-9933-e07f2a584128", 00:15:16.722 "is_configured": true, 00:15:16.722 "data_offset": 0, 00:15:16.722 "data_size": 65536 00:15:16.722 }, 00:15:16.722 { 00:15:16.722 "name": "BaseBdev2", 00:15:16.722 "uuid": "cfff2cd6-c50d-4ba9-98c6-b07f3e2279a1", 00:15:16.722 "is_configured": true, 00:15:16.722 "data_offset": 0, 00:15:16.723 "data_size": 65536 00:15:16.723 }, 00:15:16.723 { 00:15:16.723 "name": "BaseBdev3", 00:15:16.723 "uuid": "6ff7a4a5-d8eb-4a4f-84c5-e68ec023183c", 00:15:16.723 "is_configured": true, 00:15:16.723 "data_offset": 0, 00:15:16.723 "data_size": 65536 00:15:16.723 }, 00:15:16.723 { 00:15:16.723 "name": "BaseBdev4", 00:15:16.723 "uuid": "4969722d-8c01-4edd-a73f-28da8936e06d", 00:15:16.723 "is_configured": true, 00:15:16.723 "data_offset": 0, 00:15:16.723 "data_size": 65536 00:15:16.723 } 00:15:16.723 ] 00:15:16.723 }' 00:15:16.723 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.723 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.983 [2024-11-16 18:56:00.375604] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:16.983 "name": "Existed_Raid", 00:15:16.983 "aliases": [ 00:15:16.983 "b34450f8-d33e-49c1-b563-14e5bcededc3" 00:15:16.983 ], 00:15:16.983 "product_name": "Raid Volume", 00:15:16.983 "block_size": 512, 00:15:16.983 "num_blocks": 196608, 00:15:16.983 "uuid": "b34450f8-d33e-49c1-b563-14e5bcededc3", 00:15:16.983 "assigned_rate_limits": { 00:15:16.983 "rw_ios_per_sec": 0, 00:15:16.983 "rw_mbytes_per_sec": 0, 00:15:16.983 "r_mbytes_per_sec": 0, 00:15:16.983 "w_mbytes_per_sec": 0 00:15:16.983 }, 00:15:16.983 "claimed": false, 00:15:16.983 "zoned": false, 00:15:16.983 "supported_io_types": { 00:15:16.983 "read": true, 00:15:16.983 "write": true, 00:15:16.983 "unmap": false, 00:15:16.983 "flush": false, 00:15:16.983 "reset": true, 00:15:16.983 "nvme_admin": false, 00:15:16.983 "nvme_io": false, 00:15:16.983 "nvme_io_md": false, 00:15:16.983 "write_zeroes": true, 00:15:16.983 "zcopy": false, 00:15:16.983 "get_zone_info": false, 00:15:16.983 "zone_management": false, 00:15:16.983 "zone_append": false, 00:15:16.983 "compare": false, 00:15:16.983 "compare_and_write": false, 00:15:16.983 "abort": false, 00:15:16.983 "seek_hole": false, 00:15:16.983 "seek_data": false, 00:15:16.983 "copy": false, 00:15:16.983 "nvme_iov_md": false 00:15:16.983 }, 00:15:16.983 "driver_specific": { 00:15:16.983 "raid": { 00:15:16.983 "uuid": "b34450f8-d33e-49c1-b563-14e5bcededc3", 00:15:16.983 "strip_size_kb": 64, 
00:15:16.983 "state": "online", 00:15:16.983 "raid_level": "raid5f", 00:15:16.983 "superblock": false, 00:15:16.983 "num_base_bdevs": 4, 00:15:16.983 "num_base_bdevs_discovered": 4, 00:15:16.983 "num_base_bdevs_operational": 4, 00:15:16.983 "base_bdevs_list": [ 00:15:16.983 { 00:15:16.983 "name": "BaseBdev1", 00:15:16.983 "uuid": "edd5e35e-1ea9-4c06-9933-e07f2a584128", 00:15:16.983 "is_configured": true, 00:15:16.983 "data_offset": 0, 00:15:16.983 "data_size": 65536 00:15:16.983 }, 00:15:16.983 { 00:15:16.983 "name": "BaseBdev2", 00:15:16.983 "uuid": "cfff2cd6-c50d-4ba9-98c6-b07f3e2279a1", 00:15:16.983 "is_configured": true, 00:15:16.983 "data_offset": 0, 00:15:16.983 "data_size": 65536 00:15:16.983 }, 00:15:16.983 { 00:15:16.983 "name": "BaseBdev3", 00:15:16.983 "uuid": "6ff7a4a5-d8eb-4a4f-84c5-e68ec023183c", 00:15:16.983 "is_configured": true, 00:15:16.983 "data_offset": 0, 00:15:16.983 "data_size": 65536 00:15:16.983 }, 00:15:16.983 { 00:15:16.983 "name": "BaseBdev4", 00:15:16.983 "uuid": "4969722d-8c01-4edd-a73f-28da8936e06d", 00:15:16.983 "is_configured": true, 00:15:16.983 "data_offset": 0, 00:15:16.983 "data_size": 65536 00:15:16.983 } 00:15:16.983 ] 00:15:16.983 } 00:15:16.983 } 00:15:16.983 }' 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:16.983 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:16.983 BaseBdev2 00:15:16.983 BaseBdev3 00:15:16.983 BaseBdev4' 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.243 18:56:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.243 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.244 18:56:00 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.244 [2024-11-16 18:56:00.710883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.503 18:56:00 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.503 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.504 18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.504 "name": "Existed_Raid", 00:15:17.504 "uuid": "b34450f8-d33e-49c1-b563-14e5bcededc3", 00:15:17.504 "strip_size_kb": 64, 00:15:17.504 "state": "online", 00:15:17.504 "raid_level": "raid5f", 00:15:17.504 "superblock": false, 00:15:17.504 "num_base_bdevs": 4, 00:15:17.504 "num_base_bdevs_discovered": 3, 00:15:17.504 "num_base_bdevs_operational": 3, 00:15:17.504 "base_bdevs_list": [ 00:15:17.504 { 00:15:17.504 "name": null, 00:15:17.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.504 "is_configured": false, 00:15:17.504 "data_offset": 0, 00:15:17.504 "data_size": 65536 00:15:17.504 }, 00:15:17.504 { 00:15:17.504 "name": "BaseBdev2", 00:15:17.504 "uuid": "cfff2cd6-c50d-4ba9-98c6-b07f3e2279a1", 00:15:17.504 "is_configured": true, 00:15:17.504 "data_offset": 0, 00:15:17.504 "data_size": 65536 00:15:17.504 }, 00:15:17.504 { 00:15:17.504 "name": "BaseBdev3", 00:15:17.504 "uuid": "6ff7a4a5-d8eb-4a4f-84c5-e68ec023183c", 00:15:17.504 "is_configured": true, 00:15:17.504 "data_offset": 0, 00:15:17.504 "data_size": 65536 00:15:17.504 }, 00:15:17.504 { 00:15:17.504 "name": "BaseBdev4", 00:15:17.504 "uuid": "4969722d-8c01-4edd-a73f-28da8936e06d", 00:15:17.504 "is_configured": true, 00:15:17.504 "data_offset": 0, 00:15:17.504 "data_size": 65536 00:15:17.504 } 00:15:17.504 ] 00:15:17.504 }' 00:15:17.504 
18:56:00 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.504 18:56:00 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.074 [2024-11-16 18:56:01.319817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:18.074 [2024-11-16 18:56:01.319957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:18.074 [2024-11-16 18:56:01.406805] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.074 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.074 [2024-11-16 18:56:01.458732] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:18.334 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.334 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # 
jq -r '.[0]["name"]' 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.335 [2024-11-16 18:56:01.608411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:18.335 [2024-11-16 18:56:01.608504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.335 18:56:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.335 BaseBdev2 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.335 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.596 [ 00:15:18.596 { 00:15:18.596 "name": "BaseBdev2", 00:15:18.596 "aliases": [ 00:15:18.596 "0099f1be-e1ce-438e-911a-da7facd8a712" 00:15:18.596 ], 00:15:18.596 "product_name": "Malloc disk", 00:15:18.596 "block_size": 512, 00:15:18.596 "num_blocks": 65536, 00:15:18.596 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:18.596 "assigned_rate_limits": { 00:15:18.596 "rw_ios_per_sec": 0, 00:15:18.596 "rw_mbytes_per_sec": 0, 00:15:18.596 "r_mbytes_per_sec": 0, 00:15:18.596 "w_mbytes_per_sec": 0 00:15:18.596 }, 00:15:18.596 "claimed": false, 00:15:18.596 "zoned": false, 00:15:18.596 "supported_io_types": { 00:15:18.596 "read": true, 00:15:18.596 "write": true, 00:15:18.596 "unmap": true, 00:15:18.596 "flush": true, 00:15:18.596 "reset": true, 00:15:18.596 "nvme_admin": false, 00:15:18.596 "nvme_io": false, 00:15:18.596 "nvme_io_md": false, 00:15:18.596 "write_zeroes": true, 00:15:18.596 "zcopy": true, 00:15:18.596 "get_zone_info": false, 00:15:18.596 "zone_management": false, 00:15:18.596 "zone_append": false, 00:15:18.596 "compare": false, 00:15:18.596 "compare_and_write": false, 00:15:18.596 "abort": true, 00:15:18.596 "seek_hole": false, 00:15:18.596 "seek_data": false, 00:15:18.596 "copy": true, 00:15:18.596 "nvme_iov_md": false 00:15:18.596 }, 00:15:18.596 "memory_domains": [ 00:15:18.596 { 00:15:18.596 "dma_device_id": "system", 00:15:18.596 "dma_device_type": 1 00:15:18.596 }, 
00:15:18.596 { 00:15:18.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.596 "dma_device_type": 2 00:15:18.596 } 00:15:18.596 ], 00:15:18.596 "driver_specific": {} 00:15:18.596 } 00:15:18.596 ] 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.596 BaseBdev3 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.596 [ 00:15:18.596 { 00:15:18.596 "name": "BaseBdev3", 00:15:18.596 "aliases": [ 00:15:18.596 "1936e03a-9d38-4d5e-befa-237252b807dc" 00:15:18.596 ], 00:15:18.596 "product_name": "Malloc disk", 00:15:18.596 "block_size": 512, 00:15:18.596 "num_blocks": 65536, 00:15:18.596 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:18.596 "assigned_rate_limits": { 00:15:18.596 "rw_ios_per_sec": 0, 00:15:18.596 "rw_mbytes_per_sec": 0, 00:15:18.596 "r_mbytes_per_sec": 0, 00:15:18.596 "w_mbytes_per_sec": 0 00:15:18.596 }, 00:15:18.596 "claimed": false, 00:15:18.596 "zoned": false, 00:15:18.596 "supported_io_types": { 00:15:18.596 "read": true, 00:15:18.596 "write": true, 00:15:18.596 "unmap": true, 00:15:18.596 "flush": true, 00:15:18.596 "reset": true, 00:15:18.596 "nvme_admin": false, 00:15:18.596 "nvme_io": false, 00:15:18.596 "nvme_io_md": false, 00:15:18.596 "write_zeroes": true, 00:15:18.596 "zcopy": true, 00:15:18.596 "get_zone_info": false, 00:15:18.596 "zone_management": false, 00:15:18.596 "zone_append": false, 00:15:18.596 "compare": false, 00:15:18.596 "compare_and_write": false, 00:15:18.596 "abort": true, 00:15:18.596 "seek_hole": false, 00:15:18.596 "seek_data": false, 00:15:18.596 "copy": true, 00:15:18.596 "nvme_iov_md": false 00:15:18.596 }, 00:15:18.596 "memory_domains": [ 00:15:18.596 { 00:15:18.596 "dma_device_id": "system", 00:15:18.596 
"dma_device_type": 1 00:15:18.596 }, 00:15:18.596 { 00:15:18.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.596 "dma_device_type": 2 00:15:18.596 } 00:15:18.596 ], 00:15:18.596 "driver_specific": {} 00:15:18.596 } 00:15:18.596 ] 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.596 BaseBdev4 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.596 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:18.597 18:56:01 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 [ 00:15:18.597 { 00:15:18.597 "name": "BaseBdev4", 00:15:18.597 "aliases": [ 00:15:18.597 "40400ec6-f3cb-4ff4-883e-247e46bd69ea" 00:15:18.597 ], 00:15:18.597 "product_name": "Malloc disk", 00:15:18.597 "block_size": 512, 00:15:18.597 "num_blocks": 65536, 00:15:18.597 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:18.597 "assigned_rate_limits": { 00:15:18.597 "rw_ios_per_sec": 0, 00:15:18.597 "rw_mbytes_per_sec": 0, 00:15:18.597 "r_mbytes_per_sec": 0, 00:15:18.597 "w_mbytes_per_sec": 0 00:15:18.597 }, 00:15:18.597 "claimed": false, 00:15:18.597 "zoned": false, 00:15:18.597 "supported_io_types": { 00:15:18.597 "read": true, 00:15:18.597 "write": true, 00:15:18.597 "unmap": true, 00:15:18.597 "flush": true, 00:15:18.597 "reset": true, 00:15:18.597 "nvme_admin": false, 00:15:18.597 "nvme_io": false, 00:15:18.597 "nvme_io_md": false, 00:15:18.597 "write_zeroes": true, 00:15:18.597 "zcopy": true, 00:15:18.597 "get_zone_info": false, 00:15:18.597 "zone_management": false, 00:15:18.597 "zone_append": false, 00:15:18.597 "compare": false, 00:15:18.597 "compare_and_write": false, 00:15:18.597 "abort": true, 00:15:18.597 "seek_hole": false, 00:15:18.597 "seek_data": false, 00:15:18.597 "copy": true, 00:15:18.597 "nvme_iov_md": false 00:15:18.597 }, 00:15:18.597 "memory_domains": [ 00:15:18.597 { 00:15:18.597 
"dma_device_id": "system", 00:15:18.597 "dma_device_type": 1 00:15:18.597 }, 00:15:18.597 { 00:15:18.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:18.597 "dma_device_type": 2 00:15:18.597 } 00:15:18.597 ], 00:15:18.597 "driver_specific": {} 00:15:18.597 } 00:15:18.597 ] 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 [2024-11-16 18:56:01.990564] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:18.597 [2024-11-16 18:56:01.990655] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:18.597 [2024-11-16 18:56:01.990714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:18.597 [2024-11-16 18:56:01.992480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:18.597 [2024-11-16 18:56:01.992572] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.597 18:56:01 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.597 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:18.597 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.597 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.597 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.597 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.597 "name": "Existed_Raid", 00:15:18.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.597 "strip_size_kb": 64, 00:15:18.597 "state": "configuring", 00:15:18.597 "raid_level": "raid5f", 00:15:18.597 "superblock": false, 00:15:18.597 
"num_base_bdevs": 4, 00:15:18.597 "num_base_bdevs_discovered": 3, 00:15:18.597 "num_base_bdevs_operational": 4, 00:15:18.597 "base_bdevs_list": [ 00:15:18.597 { 00:15:18.597 "name": "BaseBdev1", 00:15:18.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:18.597 "is_configured": false, 00:15:18.597 "data_offset": 0, 00:15:18.597 "data_size": 0 00:15:18.597 }, 00:15:18.597 { 00:15:18.597 "name": "BaseBdev2", 00:15:18.597 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:18.597 "is_configured": true, 00:15:18.597 "data_offset": 0, 00:15:18.597 "data_size": 65536 00:15:18.597 }, 00:15:18.597 { 00:15:18.597 "name": "BaseBdev3", 00:15:18.597 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:18.597 "is_configured": true, 00:15:18.597 "data_offset": 0, 00:15:18.597 "data_size": 65536 00:15:18.597 }, 00:15:18.597 { 00:15:18.598 "name": "BaseBdev4", 00:15:18.598 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:18.598 "is_configured": true, 00:15:18.598 "data_offset": 0, 00:15:18.598 "data_size": 65536 00:15:18.598 } 00:15:18.598 ] 00:15:18.598 }' 00:15:18.598 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.598 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.168 [2024-11-16 18:56:02.397866] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.168 "name": "Existed_Raid", 00:15:19.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.168 "strip_size_kb": 64, 00:15:19.168 "state": "configuring", 00:15:19.168 "raid_level": "raid5f", 00:15:19.168 "superblock": false, 00:15:19.168 "num_base_bdevs": 4, 
00:15:19.168 "num_base_bdevs_discovered": 2, 00:15:19.168 "num_base_bdevs_operational": 4, 00:15:19.168 "base_bdevs_list": [ 00:15:19.168 { 00:15:19.168 "name": "BaseBdev1", 00:15:19.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.168 "is_configured": false, 00:15:19.168 "data_offset": 0, 00:15:19.168 "data_size": 0 00:15:19.168 }, 00:15:19.168 { 00:15:19.168 "name": null, 00:15:19.168 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:19.168 "is_configured": false, 00:15:19.168 "data_offset": 0, 00:15:19.168 "data_size": 65536 00:15:19.168 }, 00:15:19.168 { 00:15:19.168 "name": "BaseBdev3", 00:15:19.168 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:19.168 "is_configured": true, 00:15:19.168 "data_offset": 0, 00:15:19.168 "data_size": 65536 00:15:19.168 }, 00:15:19.168 { 00:15:19.168 "name": "BaseBdev4", 00:15:19.168 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:19.168 "is_configured": true, 00:15:19.168 "data_offset": 0, 00:15:19.168 "data_size": 65536 00:15:19.168 } 00:15:19.168 ] 00:15:19.168 }' 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.168 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:19.428 18:56:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.428 [2024-11-16 18:56:02.884763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.428 BaseBdev1 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.428 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.688 18:56:02 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.688 [ 00:15:19.688 { 00:15:19.688 "name": "BaseBdev1", 00:15:19.688 "aliases": [ 00:15:19.688 "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40" 00:15:19.688 ], 00:15:19.688 "product_name": "Malloc disk", 00:15:19.688 "block_size": 512, 00:15:19.688 "num_blocks": 65536, 00:15:19.688 "uuid": "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40", 00:15:19.688 "assigned_rate_limits": { 00:15:19.688 "rw_ios_per_sec": 0, 00:15:19.688 "rw_mbytes_per_sec": 0, 00:15:19.688 "r_mbytes_per_sec": 0, 00:15:19.688 "w_mbytes_per_sec": 0 00:15:19.688 }, 00:15:19.688 "claimed": true, 00:15:19.688 "claim_type": "exclusive_write", 00:15:19.688 "zoned": false, 00:15:19.688 "supported_io_types": { 00:15:19.688 "read": true, 00:15:19.688 "write": true, 00:15:19.688 "unmap": true, 00:15:19.688 "flush": true, 00:15:19.688 "reset": true, 00:15:19.688 "nvme_admin": false, 00:15:19.688 "nvme_io": false, 00:15:19.688 "nvme_io_md": false, 00:15:19.688 "write_zeroes": true, 00:15:19.688 "zcopy": true, 00:15:19.688 "get_zone_info": false, 00:15:19.688 "zone_management": false, 00:15:19.688 "zone_append": false, 00:15:19.688 "compare": false, 00:15:19.688 "compare_and_write": false, 00:15:19.688 "abort": true, 00:15:19.688 "seek_hole": false, 00:15:19.688 "seek_data": false, 00:15:19.688 "copy": true, 00:15:19.688 "nvme_iov_md": false 00:15:19.688 }, 00:15:19.688 "memory_domains": [ 00:15:19.688 { 00:15:19.688 "dma_device_id": "system", 00:15:19.688 "dma_device_type": 1 00:15:19.688 }, 00:15:19.688 { 00:15:19.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:19.688 "dma_device_type": 2 00:15:19.688 } 00:15:19.688 ], 00:15:19.688 "driver_specific": {} 00:15:19.688 } 00:15:19.688 ] 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:19.688 18:56:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.688 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.689 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:19.689 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.689 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.689 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.689 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.689 "name": "Existed_Raid", 00:15:19.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.689 "strip_size_kb": 64, 00:15:19.689 "state": 
"configuring", 00:15:19.689 "raid_level": "raid5f", 00:15:19.689 "superblock": false, 00:15:19.689 "num_base_bdevs": 4, 00:15:19.689 "num_base_bdevs_discovered": 3, 00:15:19.689 "num_base_bdevs_operational": 4, 00:15:19.689 "base_bdevs_list": [ 00:15:19.689 { 00:15:19.689 "name": "BaseBdev1", 00:15:19.689 "uuid": "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40", 00:15:19.689 "is_configured": true, 00:15:19.689 "data_offset": 0, 00:15:19.689 "data_size": 65536 00:15:19.689 }, 00:15:19.689 { 00:15:19.689 "name": null, 00:15:19.689 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:19.689 "is_configured": false, 00:15:19.689 "data_offset": 0, 00:15:19.689 "data_size": 65536 00:15:19.689 }, 00:15:19.689 { 00:15:19.689 "name": "BaseBdev3", 00:15:19.689 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:19.689 "is_configured": true, 00:15:19.689 "data_offset": 0, 00:15:19.689 "data_size": 65536 00:15:19.689 }, 00:15:19.689 { 00:15:19.689 "name": "BaseBdev4", 00:15:19.689 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:19.689 "is_configured": true, 00:15:19.689 "data_offset": 0, 00:15:19.689 "data_size": 65536 00:15:19.689 } 00:15:19.689 ] 00:15:19.689 }' 00:15:19.689 18:56:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.689 18:56:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.949 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.949 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.949 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.949 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:19.949 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.949 18:56:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:19.949 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:19.949 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.949 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.208 [2024-11-16 18:56:03.419897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:20.208 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.208 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:20.208 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.208 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.208 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.208 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.208 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.209 18:56:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.209 "name": "Existed_Raid", 00:15:20.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.209 "strip_size_kb": 64, 00:15:20.209 "state": "configuring", 00:15:20.209 "raid_level": "raid5f", 00:15:20.209 "superblock": false, 00:15:20.209 "num_base_bdevs": 4, 00:15:20.209 "num_base_bdevs_discovered": 2, 00:15:20.209 "num_base_bdevs_operational": 4, 00:15:20.209 "base_bdevs_list": [ 00:15:20.209 { 00:15:20.209 "name": "BaseBdev1", 00:15:20.209 "uuid": "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40", 00:15:20.209 "is_configured": true, 00:15:20.209 "data_offset": 0, 00:15:20.209 "data_size": 65536 00:15:20.209 }, 00:15:20.209 { 00:15:20.209 "name": null, 00:15:20.209 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:20.209 "is_configured": false, 00:15:20.209 "data_offset": 0, 00:15:20.209 "data_size": 65536 00:15:20.209 }, 00:15:20.209 { 00:15:20.209 "name": null, 00:15:20.209 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:20.209 "is_configured": false, 00:15:20.209 "data_offset": 0, 00:15:20.209 "data_size": 65536 00:15:20.209 }, 00:15:20.209 { 00:15:20.209 "name": "BaseBdev4", 00:15:20.209 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:20.209 "is_configured": true, 00:15:20.209 "data_offset": 0, 00:15:20.209 "data_size": 65536 00:15:20.209 } 00:15:20.209 ] 00:15:20.209 }' 00:15:20.209 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.209 18:56:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.468 [2024-11-16 18:56:03.927039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:20.468 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.469 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.469 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.469 
18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:20.469 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.469 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.469 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.469 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.469 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.469 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:20.469 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.728 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.728 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.728 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.728 "name": "Existed_Raid", 00:15:20.728 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.728 "strip_size_kb": 64, 00:15:20.728 "state": "configuring", 00:15:20.728 "raid_level": "raid5f", 00:15:20.728 "superblock": false, 00:15:20.728 "num_base_bdevs": 4, 00:15:20.728 "num_base_bdevs_discovered": 3, 00:15:20.728 "num_base_bdevs_operational": 4, 00:15:20.728 "base_bdevs_list": [ 00:15:20.728 { 00:15:20.728 "name": "BaseBdev1", 00:15:20.728 "uuid": "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40", 00:15:20.728 "is_configured": true, 00:15:20.728 "data_offset": 0, 00:15:20.728 "data_size": 65536 00:15:20.728 }, 00:15:20.728 { 00:15:20.728 "name": null, 00:15:20.728 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:20.728 "is_configured": 
false, 00:15:20.728 "data_offset": 0, 00:15:20.728 "data_size": 65536 00:15:20.728 }, 00:15:20.728 { 00:15:20.728 "name": "BaseBdev3", 00:15:20.728 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:20.728 "is_configured": true, 00:15:20.728 "data_offset": 0, 00:15:20.728 "data_size": 65536 00:15:20.728 }, 00:15:20.728 { 00:15:20.728 "name": "BaseBdev4", 00:15:20.728 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:20.728 "is_configured": true, 00:15:20.728 "data_offset": 0, 00:15:20.728 "data_size": 65536 00:15:20.728 } 00:15:20.728 ] 00:15:20.728 }' 00:15:20.728 18:56:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.728 18:56:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.988 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.988 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:20.988 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.988 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.988 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.988 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:20.988 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:20.988 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.988 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.988 [2024-11-16 18:56:04.402259] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.247 "name": "Existed_Raid", 00:15:21.247 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:21.247 "strip_size_kb": 64, 00:15:21.247 "state": "configuring", 00:15:21.247 "raid_level": "raid5f", 00:15:21.247 "superblock": false, 00:15:21.247 "num_base_bdevs": 4, 00:15:21.247 "num_base_bdevs_discovered": 2, 00:15:21.247 "num_base_bdevs_operational": 4, 00:15:21.247 "base_bdevs_list": [ 00:15:21.247 { 00:15:21.247 "name": null, 00:15:21.247 "uuid": "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40", 00:15:21.247 "is_configured": false, 00:15:21.247 "data_offset": 0, 00:15:21.247 "data_size": 65536 00:15:21.247 }, 00:15:21.247 { 00:15:21.247 "name": null, 00:15:21.247 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:21.247 "is_configured": false, 00:15:21.247 "data_offset": 0, 00:15:21.247 "data_size": 65536 00:15:21.247 }, 00:15:21.247 { 00:15:21.247 "name": "BaseBdev3", 00:15:21.247 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:21.247 "is_configured": true, 00:15:21.247 "data_offset": 0, 00:15:21.247 "data_size": 65536 00:15:21.247 }, 00:15:21.247 { 00:15:21.247 "name": "BaseBdev4", 00:15:21.247 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:21.247 "is_configured": true, 00:15:21.247 "data_offset": 0, 00:15:21.247 "data_size": 65536 00:15:21.247 } 00:15:21.247 ] 00:15:21.247 }' 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.247 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.507 [2024-11-16 18:56:04.965927] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.507 18:56:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:21.767 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.767 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.767 18:56:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.767 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.767 "name": "Existed_Raid", 00:15:21.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.767 "strip_size_kb": 64, 00:15:21.767 "state": "configuring", 00:15:21.767 "raid_level": "raid5f", 00:15:21.767 "superblock": false, 00:15:21.767 "num_base_bdevs": 4, 00:15:21.767 "num_base_bdevs_discovered": 3, 00:15:21.767 "num_base_bdevs_operational": 4, 00:15:21.767 "base_bdevs_list": [ 00:15:21.767 { 00:15:21.767 "name": null, 00:15:21.767 "uuid": "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40", 00:15:21.767 "is_configured": false, 00:15:21.767 "data_offset": 0, 00:15:21.767 "data_size": 65536 00:15:21.767 }, 00:15:21.767 { 00:15:21.767 "name": "BaseBdev2", 00:15:21.767 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:21.767 "is_configured": true, 00:15:21.767 "data_offset": 0, 00:15:21.767 "data_size": 65536 00:15:21.767 }, 00:15:21.767 { 00:15:21.767 "name": "BaseBdev3", 00:15:21.767 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:21.767 "is_configured": true, 00:15:21.767 "data_offset": 0, 00:15:21.767 "data_size": 65536 00:15:21.767 }, 00:15:21.767 { 00:15:21.767 "name": "BaseBdev4", 00:15:21.767 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:21.767 "is_configured": true, 00:15:21.767 "data_offset": 0, 00:15:21.767 "data_size": 65536 00:15:21.767 } 00:15:21.767 ] 00:15:21.767 }' 00:15:21.767 18:56:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.767 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.027 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.027 [2024-11-16 18:56:05.492092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:22.027 [2024-11-16 
18:56:05.492197] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:22.027 [2024-11-16 18:56:05.492222] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:22.027 [2024-11-16 18:56:05.492511] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:22.286 [2024-11-16 18:56:05.499042] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:22.286 [2024-11-16 18:56:05.499098] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:22.286 [2024-11-16 18:56:05.499398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.286 NewBaseBdev 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.286 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.286 [ 00:15:22.286 { 00:15:22.286 "name": "NewBaseBdev", 00:15:22.286 "aliases": [ 00:15:22.286 "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40" 00:15:22.286 ], 00:15:22.286 "product_name": "Malloc disk", 00:15:22.286 "block_size": 512, 00:15:22.286 "num_blocks": 65536, 00:15:22.286 "uuid": "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40", 00:15:22.286 "assigned_rate_limits": { 00:15:22.286 "rw_ios_per_sec": 0, 00:15:22.287 "rw_mbytes_per_sec": 0, 00:15:22.287 "r_mbytes_per_sec": 0, 00:15:22.287 "w_mbytes_per_sec": 0 00:15:22.287 }, 00:15:22.287 "claimed": true, 00:15:22.287 "claim_type": "exclusive_write", 00:15:22.287 "zoned": false, 00:15:22.287 "supported_io_types": { 00:15:22.287 "read": true, 00:15:22.287 "write": true, 00:15:22.287 "unmap": true, 00:15:22.287 "flush": true, 00:15:22.287 "reset": true, 00:15:22.287 "nvme_admin": false, 00:15:22.287 "nvme_io": false, 00:15:22.287 "nvme_io_md": false, 00:15:22.287 "write_zeroes": true, 00:15:22.287 "zcopy": true, 00:15:22.287 "get_zone_info": false, 00:15:22.287 "zone_management": false, 00:15:22.287 "zone_append": false, 00:15:22.287 "compare": false, 00:15:22.287 "compare_and_write": false, 00:15:22.287 "abort": true, 00:15:22.287 "seek_hole": false, 00:15:22.287 "seek_data": false, 00:15:22.287 "copy": true, 00:15:22.287 "nvme_iov_md": false 00:15:22.287 }, 00:15:22.287 "memory_domains": [ 00:15:22.287 { 00:15:22.287 "dma_device_id": "system", 00:15:22.287 "dma_device_type": 1 00:15:22.287 }, 00:15:22.287 { 00:15:22.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:22.287 "dma_device_type": 2 00:15:22.287 } 
00:15:22.287 ], 00:15:22.287 "driver_specific": {} 00:15:22.287 } 00:15:22.287 ] 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:22.287 "name": "Existed_Raid", 00:15:22.287 "uuid": "d0331c9f-8bdc-4756-9c99-31b42cec3dc3", 00:15:22.287 "strip_size_kb": 64, 00:15:22.287 "state": "online", 00:15:22.287 "raid_level": "raid5f", 00:15:22.287 "superblock": false, 00:15:22.287 "num_base_bdevs": 4, 00:15:22.287 "num_base_bdevs_discovered": 4, 00:15:22.287 "num_base_bdevs_operational": 4, 00:15:22.287 "base_bdevs_list": [ 00:15:22.287 { 00:15:22.287 "name": "NewBaseBdev", 00:15:22.287 "uuid": "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40", 00:15:22.287 "is_configured": true, 00:15:22.287 "data_offset": 0, 00:15:22.287 "data_size": 65536 00:15:22.287 }, 00:15:22.287 { 00:15:22.287 "name": "BaseBdev2", 00:15:22.287 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:22.287 "is_configured": true, 00:15:22.287 "data_offset": 0, 00:15:22.287 "data_size": 65536 00:15:22.287 }, 00:15:22.287 { 00:15:22.287 "name": "BaseBdev3", 00:15:22.287 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:22.287 "is_configured": true, 00:15:22.287 "data_offset": 0, 00:15:22.287 "data_size": 65536 00:15:22.287 }, 00:15:22.287 { 00:15:22.287 "name": "BaseBdev4", 00:15:22.287 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:22.287 "is_configured": true, 00:15:22.287 "data_offset": 0, 00:15:22.287 "data_size": 65536 00:15:22.287 } 00:15:22.287 ] 00:15:22.287 }' 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:22.287 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.547 [2024-11-16 18:56:05.962822] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.547 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:22.547 "name": "Existed_Raid", 00:15:22.547 "aliases": [ 00:15:22.547 "d0331c9f-8bdc-4756-9c99-31b42cec3dc3" 00:15:22.547 ], 00:15:22.547 "product_name": "Raid Volume", 00:15:22.547 "block_size": 512, 00:15:22.547 "num_blocks": 196608, 00:15:22.547 "uuid": "d0331c9f-8bdc-4756-9c99-31b42cec3dc3", 00:15:22.547 "assigned_rate_limits": { 00:15:22.547 "rw_ios_per_sec": 0, 00:15:22.547 "rw_mbytes_per_sec": 0, 00:15:22.547 "r_mbytes_per_sec": 0, 00:15:22.547 "w_mbytes_per_sec": 0 00:15:22.547 }, 00:15:22.547 "claimed": false, 00:15:22.547 "zoned": false, 00:15:22.547 "supported_io_types": { 00:15:22.547 "read": true, 00:15:22.547 "write": true, 00:15:22.547 "unmap": false, 00:15:22.547 "flush": false, 00:15:22.547 "reset": true, 00:15:22.547 "nvme_admin": false, 00:15:22.547 "nvme_io": false, 00:15:22.547 "nvme_io_md": 
false, 00:15:22.547 "write_zeroes": true, 00:15:22.547 "zcopy": false, 00:15:22.547 "get_zone_info": false, 00:15:22.547 "zone_management": false, 00:15:22.547 "zone_append": false, 00:15:22.547 "compare": false, 00:15:22.547 "compare_and_write": false, 00:15:22.547 "abort": false, 00:15:22.547 "seek_hole": false, 00:15:22.547 "seek_data": false, 00:15:22.547 "copy": false, 00:15:22.547 "nvme_iov_md": false 00:15:22.547 }, 00:15:22.547 "driver_specific": { 00:15:22.547 "raid": { 00:15:22.547 "uuid": "d0331c9f-8bdc-4756-9c99-31b42cec3dc3", 00:15:22.547 "strip_size_kb": 64, 00:15:22.547 "state": "online", 00:15:22.547 "raid_level": "raid5f", 00:15:22.547 "superblock": false, 00:15:22.547 "num_base_bdevs": 4, 00:15:22.547 "num_base_bdevs_discovered": 4, 00:15:22.547 "num_base_bdevs_operational": 4, 00:15:22.547 "base_bdevs_list": [ 00:15:22.547 { 00:15:22.547 "name": "NewBaseBdev", 00:15:22.547 "uuid": "34cfcc8b-3f64-4a1e-b2e2-2e3332d49f40", 00:15:22.547 "is_configured": true, 00:15:22.547 "data_offset": 0, 00:15:22.547 "data_size": 65536 00:15:22.547 }, 00:15:22.547 { 00:15:22.547 "name": "BaseBdev2", 00:15:22.547 "uuid": "0099f1be-e1ce-438e-911a-da7facd8a712", 00:15:22.547 "is_configured": true, 00:15:22.547 "data_offset": 0, 00:15:22.547 "data_size": 65536 00:15:22.547 }, 00:15:22.547 { 00:15:22.547 "name": "BaseBdev3", 00:15:22.547 "uuid": "1936e03a-9d38-4d5e-befa-237252b807dc", 00:15:22.547 "is_configured": true, 00:15:22.547 "data_offset": 0, 00:15:22.547 "data_size": 65536 00:15:22.548 }, 00:15:22.548 { 00:15:22.548 "name": "BaseBdev4", 00:15:22.548 "uuid": "40400ec6-f3cb-4ff4-883e-247e46bd69ea", 00:15:22.548 "is_configured": true, 00:15:22.548 "data_offset": 0, 00:15:22.548 "data_size": 65536 00:15:22.548 } 00:15:22.548 ] 00:15:22.548 } 00:15:22.548 } 00:15:22.548 }' 00:15:22.548 18:56:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:22.548 18:56:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:22.548 BaseBdev2 00:15:22.548 BaseBdev3 00:15:22.548 BaseBdev4' 00:15:22.548 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.808 18:56:06 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.808 [2024-11-16 18:56:06.234126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:22.808 [2024-11-16 18:56:06.234151] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.808 [2024-11-16 18:56:06.234214] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.808 [2024-11-16 18:56:06.234486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.808 [2024-11-16 18:56:06.234495] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82444 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82444 ']' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82444 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82444 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:22.808 killing process with pid 82444 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82444' 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82444 00:15:22.808 [2024-11-16 18:56:06.277072] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.808 18:56:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82444 00:15:23.377 [2024-11-16 18:56:06.646658] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:24.316 00:15:24.316 real 0m11.047s 00:15:24.316 user 0m17.606s 00:15:24.316 sys 0m1.986s 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.316 ************************************ 00:15:24.316 END TEST raid5f_state_function_test 00:15:24.316 ************************************ 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.316 18:56:07 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:15:24.316 18:56:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:24.316 18:56:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.316 18:56:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:24.316 ************************************ 00:15:24.316 START TEST 
raid5f_state_function_test_sb 00:15:24.316 ************************************ 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:15:24.316 
18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:24.316 Process raid pid: 83110 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83110 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83110' 00:15:24.316 18:56:07 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83110 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83110 ']' 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:24.316 18:56:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:24.576 [2024-11-16 18:56:07.838336] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:24.576 [2024-11-16 18:56:07.838540] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.576 [2024-11-16 18:56:08.009384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.835 [2024-11-16 18:56:08.116023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.095 [2024-11-16 18:56:08.310824] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.095 [2024-11-16 18:56:08.310937] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.356 [2024-11-16 18:56:08.662739] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.356 [2024-11-16 18:56:08.662830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.356 [2024-11-16 18:56:08.662859] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.356 [2024-11-16 18:56:08.662883] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.356 [2024-11-16 18:56:08.662901] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:15:25.356 [2024-11-16 18:56:08.662921] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.356 [2024-11-16 18:56:08.662938] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:25.356 [2024-11-16 18:56:08.662958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.356 "name": "Existed_Raid", 00:15:25.356 "uuid": "3e69e883-04f3-43df-942e-ce975eb3e14b", 00:15:25.356 "strip_size_kb": 64, 00:15:25.356 "state": "configuring", 00:15:25.356 "raid_level": "raid5f", 00:15:25.356 "superblock": true, 00:15:25.356 "num_base_bdevs": 4, 00:15:25.356 "num_base_bdevs_discovered": 0, 00:15:25.356 "num_base_bdevs_operational": 4, 00:15:25.356 "base_bdevs_list": [ 00:15:25.356 { 00:15:25.356 "name": "BaseBdev1", 00:15:25.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.356 "is_configured": false, 00:15:25.356 "data_offset": 0, 00:15:25.356 "data_size": 0 00:15:25.356 }, 00:15:25.356 { 00:15:25.356 "name": "BaseBdev2", 00:15:25.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.356 "is_configured": false, 00:15:25.356 "data_offset": 0, 00:15:25.356 "data_size": 0 00:15:25.356 }, 00:15:25.356 { 00:15:25.356 "name": "BaseBdev3", 00:15:25.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.356 "is_configured": false, 00:15:25.356 "data_offset": 0, 00:15:25.356 "data_size": 0 00:15:25.356 }, 00:15:25.356 { 00:15:25.356 "name": "BaseBdev4", 00:15:25.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.356 "is_configured": false, 00:15:25.356 "data_offset": 0, 00:15:25.356 "data_size": 0 00:15:25.356 } 00:15:25.356 ] 00:15:25.356 }' 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.356 18:56:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.926 [2024-11-16 18:56:09.109876] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.926 [2024-11-16 18:56:09.109909] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.926 [2024-11-16 18:56:09.121866] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.926 [2024-11-16 18:56:09.121906] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.926 [2024-11-16 18:56:09.121915] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.926 [2024-11-16 18:56:09.121939] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.926 [2024-11-16 18:56:09.121945] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.926 [2024-11-16 18:56:09.121953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.926 [2024-11-16 18:56:09.121959] 
bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:25.926 [2024-11-16 18:56:09.121967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.926 [2024-11-16 18:56:09.162872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.926 BaseBdev1 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.926 [ 00:15:25.926 { 00:15:25.926 "name": "BaseBdev1", 00:15:25.926 "aliases": [ 00:15:25.926 "cfb025f1-5b79-464b-8bbb-f80d538f2f6a" 00:15:25.926 ], 00:15:25.926 "product_name": "Malloc disk", 00:15:25.926 "block_size": 512, 00:15:25.926 "num_blocks": 65536, 00:15:25.926 "uuid": "cfb025f1-5b79-464b-8bbb-f80d538f2f6a", 00:15:25.926 "assigned_rate_limits": { 00:15:25.926 "rw_ios_per_sec": 0, 00:15:25.926 "rw_mbytes_per_sec": 0, 00:15:25.926 "r_mbytes_per_sec": 0, 00:15:25.926 "w_mbytes_per_sec": 0 00:15:25.926 }, 00:15:25.926 "claimed": true, 00:15:25.926 "claim_type": "exclusive_write", 00:15:25.926 "zoned": false, 00:15:25.926 "supported_io_types": { 00:15:25.926 "read": true, 00:15:25.926 "write": true, 00:15:25.926 "unmap": true, 00:15:25.926 "flush": true, 00:15:25.926 "reset": true, 00:15:25.926 "nvme_admin": false, 00:15:25.926 "nvme_io": false, 00:15:25.926 "nvme_io_md": false, 00:15:25.926 "write_zeroes": true, 00:15:25.926 "zcopy": true, 00:15:25.926 "get_zone_info": false, 00:15:25.926 "zone_management": false, 00:15:25.926 "zone_append": false, 00:15:25.926 "compare": false, 00:15:25.926 "compare_and_write": false, 00:15:25.926 "abort": true, 00:15:25.926 "seek_hole": false, 00:15:25.926 "seek_data": false, 00:15:25.926 "copy": true, 00:15:25.926 "nvme_iov_md": false 00:15:25.926 }, 00:15:25.926 "memory_domains": [ 00:15:25.926 { 00:15:25.926 "dma_device_id": "system", 00:15:25.926 "dma_device_type": 1 00:15:25.926 }, 00:15:25.926 { 00:15:25.926 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:15:25.926 "dma_device_type": 2 00:15:25.926 } 00:15:25.926 ], 00:15:25.926 "driver_specific": {} 00:15:25.926 } 00:15:25.926 ] 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.926 18:56:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.926 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.926 "name": "Existed_Raid", 00:15:25.926 "uuid": "e1112e64-7e2e-46fe-9933-1653480847e5", 00:15:25.926 "strip_size_kb": 64, 00:15:25.926 "state": "configuring", 00:15:25.926 "raid_level": "raid5f", 00:15:25.926 "superblock": true, 00:15:25.926 "num_base_bdevs": 4, 00:15:25.927 "num_base_bdevs_discovered": 1, 00:15:25.927 "num_base_bdevs_operational": 4, 00:15:25.927 "base_bdevs_list": [ 00:15:25.927 { 00:15:25.927 "name": "BaseBdev1", 00:15:25.927 "uuid": "cfb025f1-5b79-464b-8bbb-f80d538f2f6a", 00:15:25.927 "is_configured": true, 00:15:25.927 "data_offset": 2048, 00:15:25.927 "data_size": 63488 00:15:25.927 }, 00:15:25.927 { 00:15:25.927 "name": "BaseBdev2", 00:15:25.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.927 "is_configured": false, 00:15:25.927 "data_offset": 0, 00:15:25.927 "data_size": 0 00:15:25.927 }, 00:15:25.927 { 00:15:25.927 "name": "BaseBdev3", 00:15:25.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.927 "is_configured": false, 00:15:25.927 "data_offset": 0, 00:15:25.927 "data_size": 0 00:15:25.927 }, 00:15:25.927 { 00:15:25.927 "name": "BaseBdev4", 00:15:25.927 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.927 "is_configured": false, 00:15:25.927 "data_offset": 0, 00:15:25.927 "data_size": 0 00:15:25.927 } 00:15:25.927 ] 00:15:25.927 }' 00:15:25.927 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.927 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:26.202 18:56:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.202 [2024-11-16 18:56:09.606119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:26.202 [2024-11-16 18:56:09.606158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.202 [2024-11-16 18:56:09.614165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.202 [2024-11-16 18:56:09.615860] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:26.202 [2024-11-16 18:56:09.615942] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:26.202 [2024-11-16 18:56:09.615979] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:26.202 [2024-11-16 18:56:09.615991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:26.202 [2024-11-16 18:56:09.615998] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:26.202 [2024-11-16 18:56:09.616006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.202 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.202 18:56:09 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.497 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.497 "name": "Existed_Raid", 00:15:26.497 "uuid": "3a01dbed-f0e8-486e-b5b9-8198fd5114c7", 00:15:26.497 "strip_size_kb": 64, 00:15:26.497 "state": "configuring", 00:15:26.497 "raid_level": "raid5f", 00:15:26.497 "superblock": true, 00:15:26.497 "num_base_bdevs": 4, 00:15:26.497 "num_base_bdevs_discovered": 1, 00:15:26.497 "num_base_bdevs_operational": 4, 00:15:26.497 "base_bdevs_list": [ 00:15:26.497 { 00:15:26.497 "name": "BaseBdev1", 00:15:26.497 "uuid": "cfb025f1-5b79-464b-8bbb-f80d538f2f6a", 00:15:26.497 "is_configured": true, 00:15:26.497 "data_offset": 2048, 00:15:26.497 "data_size": 63488 00:15:26.497 }, 00:15:26.497 { 00:15:26.497 "name": "BaseBdev2", 00:15:26.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.497 "is_configured": false, 00:15:26.497 "data_offset": 0, 00:15:26.497 "data_size": 0 00:15:26.497 }, 00:15:26.497 { 00:15:26.497 "name": "BaseBdev3", 00:15:26.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.497 "is_configured": false, 00:15:26.497 "data_offset": 0, 00:15:26.497 "data_size": 0 00:15:26.497 }, 00:15:26.497 { 00:15:26.497 "name": "BaseBdev4", 00:15:26.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.497 "is_configured": false, 00:15:26.497 "data_offset": 0, 00:15:26.497 "data_size": 0 00:15:26.497 } 00:15:26.497 ] 00:15:26.497 }' 00:15:26.497 18:56:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.497 18:56:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.767 [2024-11-16 18:56:10.082230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.767 BaseBdev2 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.767 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.767 [ 00:15:26.767 { 00:15:26.767 "name": "BaseBdev2", 00:15:26.768 "aliases": [ 00:15:26.768 
"4fdde45a-20b0-44d8-9068-1c1601c44737" 00:15:26.768 ], 00:15:26.768 "product_name": "Malloc disk", 00:15:26.768 "block_size": 512, 00:15:26.768 "num_blocks": 65536, 00:15:26.768 "uuid": "4fdde45a-20b0-44d8-9068-1c1601c44737", 00:15:26.768 "assigned_rate_limits": { 00:15:26.768 "rw_ios_per_sec": 0, 00:15:26.768 "rw_mbytes_per_sec": 0, 00:15:26.768 "r_mbytes_per_sec": 0, 00:15:26.768 "w_mbytes_per_sec": 0 00:15:26.768 }, 00:15:26.768 "claimed": true, 00:15:26.768 "claim_type": "exclusive_write", 00:15:26.768 "zoned": false, 00:15:26.768 "supported_io_types": { 00:15:26.768 "read": true, 00:15:26.768 "write": true, 00:15:26.768 "unmap": true, 00:15:26.768 "flush": true, 00:15:26.768 "reset": true, 00:15:26.768 "nvme_admin": false, 00:15:26.768 "nvme_io": false, 00:15:26.768 "nvme_io_md": false, 00:15:26.768 "write_zeroes": true, 00:15:26.768 "zcopy": true, 00:15:26.768 "get_zone_info": false, 00:15:26.768 "zone_management": false, 00:15:26.768 "zone_append": false, 00:15:26.768 "compare": false, 00:15:26.768 "compare_and_write": false, 00:15:26.768 "abort": true, 00:15:26.768 "seek_hole": false, 00:15:26.768 "seek_data": false, 00:15:26.768 "copy": true, 00:15:26.768 "nvme_iov_md": false 00:15:26.768 }, 00:15:26.768 "memory_domains": [ 00:15:26.768 { 00:15:26.768 "dma_device_id": "system", 00:15:26.768 "dma_device_type": 1 00:15:26.768 }, 00:15:26.768 { 00:15:26.768 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.768 "dma_device_type": 2 00:15:26.768 } 00:15:26.768 ], 00:15:26.768 "driver_specific": {} 00:15:26.768 } 00:15:26.768 ] 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.768 "name": "Existed_Raid", 00:15:26.768 "uuid": 
"3a01dbed-f0e8-486e-b5b9-8198fd5114c7", 00:15:26.768 "strip_size_kb": 64, 00:15:26.768 "state": "configuring", 00:15:26.768 "raid_level": "raid5f", 00:15:26.768 "superblock": true, 00:15:26.768 "num_base_bdevs": 4, 00:15:26.768 "num_base_bdevs_discovered": 2, 00:15:26.768 "num_base_bdevs_operational": 4, 00:15:26.768 "base_bdevs_list": [ 00:15:26.768 { 00:15:26.768 "name": "BaseBdev1", 00:15:26.768 "uuid": "cfb025f1-5b79-464b-8bbb-f80d538f2f6a", 00:15:26.768 "is_configured": true, 00:15:26.768 "data_offset": 2048, 00:15:26.768 "data_size": 63488 00:15:26.768 }, 00:15:26.768 { 00:15:26.768 "name": "BaseBdev2", 00:15:26.768 "uuid": "4fdde45a-20b0-44d8-9068-1c1601c44737", 00:15:26.768 "is_configured": true, 00:15:26.768 "data_offset": 2048, 00:15:26.768 "data_size": 63488 00:15:26.768 }, 00:15:26.768 { 00:15:26.768 "name": "BaseBdev3", 00:15:26.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.768 "is_configured": false, 00:15:26.768 "data_offset": 0, 00:15:26.768 "data_size": 0 00:15:26.768 }, 00:15:26.768 { 00:15:26.768 "name": "BaseBdev4", 00:15:26.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.768 "is_configured": false, 00:15:26.768 "data_offset": 0, 00:15:26.768 "data_size": 0 00:15:26.768 } 00:15:26.768 ] 00:15:26.768 }' 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.768 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.027 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:27.027 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.027 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.288 [2024-11-16 18:56:10.528471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.288 BaseBdev3 
00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.288 [ 00:15:27.288 { 00:15:27.288 "name": "BaseBdev3", 00:15:27.288 "aliases": [ 00:15:27.288 "83a0edf1-a16f-4768-b879-13a012f54005" 00:15:27.288 ], 00:15:27.288 "product_name": "Malloc disk", 00:15:27.288 "block_size": 512, 00:15:27.288 "num_blocks": 65536, 00:15:27.288 "uuid": "83a0edf1-a16f-4768-b879-13a012f54005", 00:15:27.288 
"assigned_rate_limits": { 00:15:27.288 "rw_ios_per_sec": 0, 00:15:27.288 "rw_mbytes_per_sec": 0, 00:15:27.288 "r_mbytes_per_sec": 0, 00:15:27.288 "w_mbytes_per_sec": 0 00:15:27.288 }, 00:15:27.288 "claimed": true, 00:15:27.288 "claim_type": "exclusive_write", 00:15:27.288 "zoned": false, 00:15:27.288 "supported_io_types": { 00:15:27.288 "read": true, 00:15:27.288 "write": true, 00:15:27.288 "unmap": true, 00:15:27.288 "flush": true, 00:15:27.288 "reset": true, 00:15:27.288 "nvme_admin": false, 00:15:27.288 "nvme_io": false, 00:15:27.288 "nvme_io_md": false, 00:15:27.288 "write_zeroes": true, 00:15:27.288 "zcopy": true, 00:15:27.288 "get_zone_info": false, 00:15:27.288 "zone_management": false, 00:15:27.288 "zone_append": false, 00:15:27.288 "compare": false, 00:15:27.288 "compare_and_write": false, 00:15:27.288 "abort": true, 00:15:27.288 "seek_hole": false, 00:15:27.288 "seek_data": false, 00:15:27.288 "copy": true, 00:15:27.288 "nvme_iov_md": false 00:15:27.288 }, 00:15:27.288 "memory_domains": [ 00:15:27.288 { 00:15:27.288 "dma_device_id": "system", 00:15:27.288 "dma_device_type": 1 00:15:27.288 }, 00:15:27.288 { 00:15:27.288 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.288 "dma_device_type": 2 00:15:27.288 } 00:15:27.288 ], 00:15:27.288 "driver_specific": {} 00:15:27.288 } 00:15:27.288 ] 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.288 "name": "Existed_Raid", 00:15:27.288 "uuid": "3a01dbed-f0e8-486e-b5b9-8198fd5114c7", 00:15:27.288 "strip_size_kb": 64, 00:15:27.288 "state": "configuring", 00:15:27.288 "raid_level": "raid5f", 00:15:27.288 "superblock": true, 00:15:27.288 "num_base_bdevs": 4, 00:15:27.288 "num_base_bdevs_discovered": 3, 
00:15:27.288 "num_base_bdevs_operational": 4, 00:15:27.288 "base_bdevs_list": [ 00:15:27.288 { 00:15:27.288 "name": "BaseBdev1", 00:15:27.288 "uuid": "cfb025f1-5b79-464b-8bbb-f80d538f2f6a", 00:15:27.288 "is_configured": true, 00:15:27.288 "data_offset": 2048, 00:15:27.288 "data_size": 63488 00:15:27.288 }, 00:15:27.288 { 00:15:27.288 "name": "BaseBdev2", 00:15:27.288 "uuid": "4fdde45a-20b0-44d8-9068-1c1601c44737", 00:15:27.288 "is_configured": true, 00:15:27.288 "data_offset": 2048, 00:15:27.288 "data_size": 63488 00:15:27.288 }, 00:15:27.288 { 00:15:27.288 "name": "BaseBdev3", 00:15:27.288 "uuid": "83a0edf1-a16f-4768-b879-13a012f54005", 00:15:27.288 "is_configured": true, 00:15:27.288 "data_offset": 2048, 00:15:27.288 "data_size": 63488 00:15:27.288 }, 00:15:27.288 { 00:15:27.288 "name": "BaseBdev4", 00:15:27.288 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.288 "is_configured": false, 00:15:27.288 "data_offset": 0, 00:15:27.288 "data_size": 0 00:15:27.288 } 00:15:27.288 ] 00:15:27.288 }' 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.288 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.548 18:56:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:15:27.548 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.548 18:56:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.548 [2024-11-16 18:56:11.000231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:27.548 [2024-11-16 18:56:11.000598] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:27.548 [2024-11-16 18:56:11.000660] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:27.548 [2024-11-16 
18:56:11.000943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:27.548 BaseBdev4 00:15:27.548 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.548 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:15:27.548 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:27.548 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:27.549 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:27.549 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:27.549 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:27.549 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:27.549 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.549 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.549 [2024-11-16 18:56:11.008503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:27.549 [2024-11-16 18:56:11.008564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:27.549 [2024-11-16 18:56:11.008870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.549 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.549 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:27.549 18:56:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.549 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.809 [ 00:15:27.809 { 00:15:27.809 "name": "BaseBdev4", 00:15:27.809 "aliases": [ 00:15:27.809 "1a34ef91-10d3-4310-b7ca-d4d446fe2b3b" 00:15:27.809 ], 00:15:27.809 "product_name": "Malloc disk", 00:15:27.809 "block_size": 512, 00:15:27.809 "num_blocks": 65536, 00:15:27.809 "uuid": "1a34ef91-10d3-4310-b7ca-d4d446fe2b3b", 00:15:27.809 "assigned_rate_limits": { 00:15:27.809 "rw_ios_per_sec": 0, 00:15:27.809 "rw_mbytes_per_sec": 0, 00:15:27.809 "r_mbytes_per_sec": 0, 00:15:27.809 "w_mbytes_per_sec": 0 00:15:27.809 }, 00:15:27.809 "claimed": true, 00:15:27.809 "claim_type": "exclusive_write", 00:15:27.809 "zoned": false, 00:15:27.809 "supported_io_types": { 00:15:27.809 "read": true, 00:15:27.809 "write": true, 00:15:27.809 "unmap": true, 00:15:27.809 "flush": true, 00:15:27.809 "reset": true, 00:15:27.809 "nvme_admin": false, 00:15:27.809 "nvme_io": false, 00:15:27.809 "nvme_io_md": false, 00:15:27.809 "write_zeroes": true, 00:15:27.809 "zcopy": true, 00:15:27.809 "get_zone_info": false, 00:15:27.809 "zone_management": false, 00:15:27.809 "zone_append": false, 00:15:27.809 "compare": false, 00:15:27.809 "compare_and_write": false, 00:15:27.809 "abort": true, 00:15:27.809 "seek_hole": false, 00:15:27.809 "seek_data": false, 00:15:27.809 "copy": true, 00:15:27.809 "nvme_iov_md": false 00:15:27.809 }, 00:15:27.809 "memory_domains": [ 00:15:27.809 { 00:15:27.809 "dma_device_id": "system", 00:15:27.809 "dma_device_type": 1 00:15:27.809 }, 00:15:27.809 { 00:15:27.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:27.809 "dma_device_type": 2 00:15:27.809 } 00:15:27.809 ], 00:15:27.809 "driver_specific": {} 00:15:27.809 } 00:15:27.809 ] 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.809 18:56:11 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.809 "name": "Existed_Raid", 00:15:27.809 "uuid": "3a01dbed-f0e8-486e-b5b9-8198fd5114c7", 00:15:27.809 "strip_size_kb": 64, 00:15:27.809 "state": "online", 00:15:27.809 "raid_level": "raid5f", 00:15:27.809 "superblock": true, 00:15:27.809 "num_base_bdevs": 4, 00:15:27.809 "num_base_bdevs_discovered": 4, 00:15:27.809 "num_base_bdevs_operational": 4, 00:15:27.809 "base_bdevs_list": [ 00:15:27.809 { 00:15:27.809 "name": "BaseBdev1", 00:15:27.809 "uuid": "cfb025f1-5b79-464b-8bbb-f80d538f2f6a", 00:15:27.809 "is_configured": true, 00:15:27.809 "data_offset": 2048, 00:15:27.809 "data_size": 63488 00:15:27.809 }, 00:15:27.809 { 00:15:27.809 "name": "BaseBdev2", 00:15:27.809 "uuid": "4fdde45a-20b0-44d8-9068-1c1601c44737", 00:15:27.809 "is_configured": true, 00:15:27.809 "data_offset": 2048, 00:15:27.809 "data_size": 63488 00:15:27.809 }, 00:15:27.809 { 00:15:27.809 "name": "BaseBdev3", 00:15:27.809 "uuid": "83a0edf1-a16f-4768-b879-13a012f54005", 00:15:27.809 "is_configured": true, 00:15:27.809 "data_offset": 2048, 00:15:27.809 "data_size": 63488 00:15:27.809 }, 00:15:27.809 { 00:15:27.809 "name": "BaseBdev4", 00:15:27.809 "uuid": "1a34ef91-10d3-4310-b7ca-d4d446fe2b3b", 00:15:27.809 "is_configured": true, 00:15:27.809 "data_offset": 2048, 00:15:27.809 "data_size": 63488 00:15:27.809 } 00:15:27.809 ] 00:15:27.809 }' 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.809 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:28.069 [2024-11-16 18:56:11.468247] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.069 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:28.069 "name": "Existed_Raid", 00:15:28.069 "aliases": [ 00:15:28.069 "3a01dbed-f0e8-486e-b5b9-8198fd5114c7" 00:15:28.069 ], 00:15:28.069 "product_name": "Raid Volume", 00:15:28.069 "block_size": 512, 00:15:28.069 "num_blocks": 190464, 00:15:28.069 "uuid": "3a01dbed-f0e8-486e-b5b9-8198fd5114c7", 00:15:28.069 "assigned_rate_limits": { 00:15:28.069 "rw_ios_per_sec": 0, 00:15:28.069 "rw_mbytes_per_sec": 0, 00:15:28.069 "r_mbytes_per_sec": 0, 00:15:28.069 "w_mbytes_per_sec": 0 00:15:28.069 }, 00:15:28.069 "claimed": false, 00:15:28.069 "zoned": false, 00:15:28.069 "supported_io_types": { 00:15:28.069 "read": true, 00:15:28.069 "write": true, 00:15:28.069 "unmap": false, 00:15:28.069 "flush": false, 
00:15:28.069 "reset": true, 00:15:28.069 "nvme_admin": false, 00:15:28.069 "nvme_io": false, 00:15:28.069 "nvme_io_md": false, 00:15:28.069 "write_zeroes": true, 00:15:28.069 "zcopy": false, 00:15:28.069 "get_zone_info": false, 00:15:28.069 "zone_management": false, 00:15:28.069 "zone_append": false, 00:15:28.069 "compare": false, 00:15:28.069 "compare_and_write": false, 00:15:28.069 "abort": false, 00:15:28.069 "seek_hole": false, 00:15:28.069 "seek_data": false, 00:15:28.070 "copy": false, 00:15:28.070 "nvme_iov_md": false 00:15:28.070 }, 00:15:28.070 "driver_specific": { 00:15:28.070 "raid": { 00:15:28.070 "uuid": "3a01dbed-f0e8-486e-b5b9-8198fd5114c7", 00:15:28.070 "strip_size_kb": 64, 00:15:28.070 "state": "online", 00:15:28.070 "raid_level": "raid5f", 00:15:28.070 "superblock": true, 00:15:28.070 "num_base_bdevs": 4, 00:15:28.070 "num_base_bdevs_discovered": 4, 00:15:28.070 "num_base_bdevs_operational": 4, 00:15:28.070 "base_bdevs_list": [ 00:15:28.070 { 00:15:28.070 "name": "BaseBdev1", 00:15:28.070 "uuid": "cfb025f1-5b79-464b-8bbb-f80d538f2f6a", 00:15:28.070 "is_configured": true, 00:15:28.070 "data_offset": 2048, 00:15:28.070 "data_size": 63488 00:15:28.070 }, 00:15:28.070 { 00:15:28.070 "name": "BaseBdev2", 00:15:28.070 "uuid": "4fdde45a-20b0-44d8-9068-1c1601c44737", 00:15:28.070 "is_configured": true, 00:15:28.070 "data_offset": 2048, 00:15:28.070 "data_size": 63488 00:15:28.070 }, 00:15:28.070 { 00:15:28.070 "name": "BaseBdev3", 00:15:28.070 "uuid": "83a0edf1-a16f-4768-b879-13a012f54005", 00:15:28.070 "is_configured": true, 00:15:28.070 "data_offset": 2048, 00:15:28.070 "data_size": 63488 00:15:28.070 }, 00:15:28.070 { 00:15:28.070 "name": "BaseBdev4", 00:15:28.070 "uuid": "1a34ef91-10d3-4310-b7ca-d4d446fe2b3b", 00:15:28.070 "is_configured": true, 00:15:28.070 "data_offset": 2048, 00:15:28.070 "data_size": 63488 00:15:28.070 } 00:15:28.070 ] 00:15:28.070 } 00:15:28.070 } 00:15:28.070 }' 00:15:28.070 18:56:11 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:28.330 BaseBdev2 00:15:28.330 BaseBdev3 00:15:28.330 BaseBdev4' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.330 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.330 [2024-11-16 18:56:11.771567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.590 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.591 "name": "Existed_Raid", 00:15:28.591 "uuid": "3a01dbed-f0e8-486e-b5b9-8198fd5114c7", 00:15:28.591 "strip_size_kb": 64, 00:15:28.591 "state": "online", 00:15:28.591 "raid_level": "raid5f", 00:15:28.591 "superblock": true, 00:15:28.591 "num_base_bdevs": 4, 00:15:28.591 "num_base_bdevs_discovered": 3, 00:15:28.591 "num_base_bdevs_operational": 3, 00:15:28.591 "base_bdevs_list": [ 00:15:28.591 { 00:15:28.591 "name": null, 00:15:28.591 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:28.591 "is_configured": false, 00:15:28.591 "data_offset": 0, 00:15:28.591 "data_size": 63488 00:15:28.591 }, 00:15:28.591 { 00:15:28.591 "name": "BaseBdev2", 00:15:28.591 "uuid": "4fdde45a-20b0-44d8-9068-1c1601c44737", 00:15:28.591 "is_configured": true, 00:15:28.591 "data_offset": 2048, 00:15:28.591 "data_size": 63488 00:15:28.591 }, 00:15:28.591 { 00:15:28.591 "name": "BaseBdev3", 00:15:28.591 "uuid": "83a0edf1-a16f-4768-b879-13a012f54005", 00:15:28.591 "is_configured": true, 00:15:28.591 "data_offset": 2048, 00:15:28.591 "data_size": 63488 00:15:28.591 }, 00:15:28.591 { 00:15:28.591 "name": "BaseBdev4", 00:15:28.591 "uuid": "1a34ef91-10d3-4310-b7ca-d4d446fe2b3b", 00:15:28.591 "is_configured": true, 00:15:28.591 "data_offset": 2048, 00:15:28.591 "data_size": 63488 00:15:28.591 } 00:15:28.591 ] 00:15:28.591 }' 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.591 18:56:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.850 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:28.850 [2024-11-16 18:56:12.315877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:28.850 [2024-11-16 18:56:12.316037] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:29.110 [2024-11-16 18:56:12.402964] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.110 
18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.110 [2024-11-16 18:56:12.458886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.110 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.370 [2024-11-16 18:56:12.608662] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:29.370 [2024-11-16 18:56:12.608751] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.370 BaseBdev2 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.370 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.370 [ 00:15:29.370 { 00:15:29.370 "name": "BaseBdev2", 00:15:29.370 "aliases": [ 00:15:29.370 "2a9b0cc2-c713-4569-b334-20679e60329c" 00:15:29.370 ], 00:15:29.370 "product_name": "Malloc disk", 00:15:29.370 "block_size": 512, 00:15:29.370 "num_blocks": 65536, 00:15:29.370 "uuid": 
"2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:29.370 "assigned_rate_limits": { 00:15:29.371 "rw_ios_per_sec": 0, 00:15:29.371 "rw_mbytes_per_sec": 0, 00:15:29.371 "r_mbytes_per_sec": 0, 00:15:29.371 "w_mbytes_per_sec": 0 00:15:29.371 }, 00:15:29.371 "claimed": false, 00:15:29.371 "zoned": false, 00:15:29.371 "supported_io_types": { 00:15:29.371 "read": true, 00:15:29.371 "write": true, 00:15:29.371 "unmap": true, 00:15:29.371 "flush": true, 00:15:29.371 "reset": true, 00:15:29.371 "nvme_admin": false, 00:15:29.371 "nvme_io": false, 00:15:29.371 "nvme_io_md": false, 00:15:29.371 "write_zeroes": true, 00:15:29.371 "zcopy": true, 00:15:29.371 "get_zone_info": false, 00:15:29.371 "zone_management": false, 00:15:29.371 "zone_append": false, 00:15:29.371 "compare": false, 00:15:29.371 "compare_and_write": false, 00:15:29.371 "abort": true, 00:15:29.371 "seek_hole": false, 00:15:29.371 "seek_data": false, 00:15:29.371 "copy": true, 00:15:29.371 "nvme_iov_md": false 00:15:29.371 }, 00:15:29.371 "memory_domains": [ 00:15:29.371 { 00:15:29.371 "dma_device_id": "system", 00:15:29.371 "dma_device_type": 1 00:15:29.371 }, 00:15:29.371 { 00:15:29.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.371 "dma_device_type": 2 00:15:29.371 } 00:15:29.371 ], 00:15:29.371 "driver_specific": {} 00:15:29.371 } 00:15:29.371 ] 00:15:29.371 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.371 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:29.371 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.371 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.371 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:29.371 18:56:12 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.371 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.631 BaseBdev3 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.631 [ 00:15:29.631 { 00:15:29.631 "name": "BaseBdev3", 00:15:29.631 "aliases": [ 00:15:29.631 "49b9c38a-26a0-4abd-8e4c-2746362788c1" 00:15:29.631 ], 00:15:29.631 
"product_name": "Malloc disk", 00:15:29.631 "block_size": 512, 00:15:29.631 "num_blocks": 65536, 00:15:29.631 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:29.631 "assigned_rate_limits": { 00:15:29.631 "rw_ios_per_sec": 0, 00:15:29.631 "rw_mbytes_per_sec": 0, 00:15:29.631 "r_mbytes_per_sec": 0, 00:15:29.631 "w_mbytes_per_sec": 0 00:15:29.631 }, 00:15:29.631 "claimed": false, 00:15:29.631 "zoned": false, 00:15:29.631 "supported_io_types": { 00:15:29.631 "read": true, 00:15:29.631 "write": true, 00:15:29.631 "unmap": true, 00:15:29.631 "flush": true, 00:15:29.631 "reset": true, 00:15:29.631 "nvme_admin": false, 00:15:29.631 "nvme_io": false, 00:15:29.631 "nvme_io_md": false, 00:15:29.631 "write_zeroes": true, 00:15:29.631 "zcopy": true, 00:15:29.631 "get_zone_info": false, 00:15:29.631 "zone_management": false, 00:15:29.631 "zone_append": false, 00:15:29.631 "compare": false, 00:15:29.631 "compare_and_write": false, 00:15:29.631 "abort": true, 00:15:29.631 "seek_hole": false, 00:15:29.631 "seek_data": false, 00:15:29.631 "copy": true, 00:15:29.631 "nvme_iov_md": false 00:15:29.631 }, 00:15:29.631 "memory_domains": [ 00:15:29.631 { 00:15:29.631 "dma_device_id": "system", 00:15:29.631 "dma_device_type": 1 00:15:29.631 }, 00:15:29.631 { 00:15:29.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.631 "dma_device_type": 2 00:15:29.631 } 00:15:29.631 ], 00:15:29.631 "driver_specific": {} 00:15:29.631 } 00:15:29.631 ] 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.631 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.631 BaseBdev4 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.632 [ 00:15:29.632 { 00:15:29.632 "name": "BaseBdev4", 00:15:29.632 
"aliases": [ 00:15:29.632 "5f430d0f-4702-4b33-8718-4a70b35f7f36" 00:15:29.632 ], 00:15:29.632 "product_name": "Malloc disk", 00:15:29.632 "block_size": 512, 00:15:29.632 "num_blocks": 65536, 00:15:29.632 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:29.632 "assigned_rate_limits": { 00:15:29.632 "rw_ios_per_sec": 0, 00:15:29.632 "rw_mbytes_per_sec": 0, 00:15:29.632 "r_mbytes_per_sec": 0, 00:15:29.632 "w_mbytes_per_sec": 0 00:15:29.632 }, 00:15:29.632 "claimed": false, 00:15:29.632 "zoned": false, 00:15:29.632 "supported_io_types": { 00:15:29.632 "read": true, 00:15:29.632 "write": true, 00:15:29.632 "unmap": true, 00:15:29.632 "flush": true, 00:15:29.632 "reset": true, 00:15:29.632 "nvme_admin": false, 00:15:29.632 "nvme_io": false, 00:15:29.632 "nvme_io_md": false, 00:15:29.632 "write_zeroes": true, 00:15:29.632 "zcopy": true, 00:15:29.632 "get_zone_info": false, 00:15:29.632 "zone_management": false, 00:15:29.632 "zone_append": false, 00:15:29.632 "compare": false, 00:15:29.632 "compare_and_write": false, 00:15:29.632 "abort": true, 00:15:29.632 "seek_hole": false, 00:15:29.632 "seek_data": false, 00:15:29.632 "copy": true, 00:15:29.632 "nvme_iov_md": false 00:15:29.632 }, 00:15:29.632 "memory_domains": [ 00:15:29.632 { 00:15:29.632 "dma_device_id": "system", 00:15:29.632 "dma_device_type": 1 00:15:29.632 }, 00:15:29.632 { 00:15:29.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.632 "dma_device_type": 2 00:15:29.632 } 00:15:29.632 ], 00:15:29.632 "driver_specific": {} 00:15:29.632 } 00:15:29.632 ] 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:29.632 
18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.632 [2024-11-16 18:56:12.984222] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:29.632 [2024-11-16 18:56:12.984307] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:29.632 [2024-11-16 18:56:12.984346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:29.632 [2024-11-16 18:56:12.986087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.632 [2024-11-16 18:56:12.986191] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.632 18:56:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.632 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.632 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.632 "name": "Existed_Raid", 00:15:29.632 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:29.632 "strip_size_kb": 64, 00:15:29.632 "state": "configuring", 00:15:29.632 "raid_level": "raid5f", 00:15:29.632 "superblock": true, 00:15:29.632 "num_base_bdevs": 4, 00:15:29.632 "num_base_bdevs_discovered": 3, 00:15:29.632 "num_base_bdevs_operational": 4, 00:15:29.632 "base_bdevs_list": [ 00:15:29.632 { 00:15:29.632 "name": "BaseBdev1", 00:15:29.632 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.632 "is_configured": false, 00:15:29.632 "data_offset": 0, 00:15:29.632 "data_size": 0 00:15:29.632 }, 00:15:29.632 { 00:15:29.632 "name": "BaseBdev2", 00:15:29.632 "uuid": "2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:29.632 "is_configured": true, 00:15:29.632 "data_offset": 2048, 00:15:29.632 "data_size": 63488 00:15:29.632 }, 00:15:29.632 { 00:15:29.632 "name": "BaseBdev3", 
00:15:29.632 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:29.632 "is_configured": true, 00:15:29.632 "data_offset": 2048, 00:15:29.632 "data_size": 63488 00:15:29.632 }, 00:15:29.632 { 00:15:29.632 "name": "BaseBdev4", 00:15:29.632 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:29.632 "is_configured": true, 00:15:29.632 "data_offset": 2048, 00:15:29.632 "data_size": 63488 00:15:29.632 } 00:15:29.632 ] 00:15:29.632 }' 00:15:29.632 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.632 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.892 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:29.892 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.892 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.892 [2024-11-16 18:56:13.299736] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:29.892 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.892 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:29.892 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:29.892 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:29.893 
18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.893 "name": "Existed_Raid", 00:15:29.893 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:29.893 "strip_size_kb": 64, 00:15:29.893 "state": "configuring", 00:15:29.893 "raid_level": "raid5f", 00:15:29.893 "superblock": true, 00:15:29.893 "num_base_bdevs": 4, 00:15:29.893 "num_base_bdevs_discovered": 2, 00:15:29.893 "num_base_bdevs_operational": 4, 00:15:29.893 "base_bdevs_list": [ 00:15:29.893 { 00:15:29.893 "name": "BaseBdev1", 00:15:29.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.893 "is_configured": false, 00:15:29.893 "data_offset": 0, 00:15:29.893 "data_size": 0 00:15:29.893 }, 00:15:29.893 { 00:15:29.893 "name": null, 00:15:29.893 "uuid": "2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:29.893 "is_configured": false, 00:15:29.893 "data_offset": 0, 00:15:29.893 "data_size": 63488 00:15:29.893 }, 00:15:29.893 { 
00:15:29.893 "name": "BaseBdev3", 00:15:29.893 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:29.893 "is_configured": true, 00:15:29.893 "data_offset": 2048, 00:15:29.893 "data_size": 63488 00:15:29.893 }, 00:15:29.893 { 00:15:29.893 "name": "BaseBdev4", 00:15:29.893 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:29.893 "is_configured": true, 00:15:29.893 "data_offset": 2048, 00:15:29.893 "data_size": 63488 00:15:29.893 } 00:15:29.893 ] 00:15:29.893 }' 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.893 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.463 [2024-11-16 18:56:13.798436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:30.463 BaseBdev1 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.463 [ 00:15:30.463 { 00:15:30.463 "name": "BaseBdev1", 00:15:30.463 "aliases": [ 00:15:30.463 "8b5149c3-5b11-485f-8721-1575f91e1c1f" 00:15:30.463 ], 00:15:30.463 "product_name": "Malloc disk", 00:15:30.463 "block_size": 512, 00:15:30.463 "num_blocks": 65536, 00:15:30.463 "uuid": "8b5149c3-5b11-485f-8721-1575f91e1c1f", 00:15:30.463 "assigned_rate_limits": { 00:15:30.463 "rw_ios_per_sec": 0, 00:15:30.463 "rw_mbytes_per_sec": 0, 00:15:30.463 
"r_mbytes_per_sec": 0, 00:15:30.463 "w_mbytes_per_sec": 0 00:15:30.463 }, 00:15:30.463 "claimed": true, 00:15:30.463 "claim_type": "exclusive_write", 00:15:30.463 "zoned": false, 00:15:30.463 "supported_io_types": { 00:15:30.463 "read": true, 00:15:30.463 "write": true, 00:15:30.463 "unmap": true, 00:15:30.463 "flush": true, 00:15:30.463 "reset": true, 00:15:30.463 "nvme_admin": false, 00:15:30.463 "nvme_io": false, 00:15:30.463 "nvme_io_md": false, 00:15:30.463 "write_zeroes": true, 00:15:30.463 "zcopy": true, 00:15:30.463 "get_zone_info": false, 00:15:30.463 "zone_management": false, 00:15:30.463 "zone_append": false, 00:15:30.463 "compare": false, 00:15:30.463 "compare_and_write": false, 00:15:30.463 "abort": true, 00:15:30.463 "seek_hole": false, 00:15:30.463 "seek_data": false, 00:15:30.463 "copy": true, 00:15:30.463 "nvme_iov_md": false 00:15:30.463 }, 00:15:30.463 "memory_domains": [ 00:15:30.463 { 00:15:30.463 "dma_device_id": "system", 00:15:30.463 "dma_device_type": 1 00:15:30.463 }, 00:15:30.463 { 00:15:30.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:30.463 "dma_device_type": 2 00:15:30.463 } 00:15:30.463 ], 00:15:30.463 "driver_specific": {} 00:15:30.463 } 00:15:30.463 ] 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:30.463 18:56:13 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.463 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.463 "name": "Existed_Raid", 00:15:30.463 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:30.463 "strip_size_kb": 64, 00:15:30.463 "state": "configuring", 00:15:30.463 "raid_level": "raid5f", 00:15:30.464 "superblock": true, 00:15:30.464 "num_base_bdevs": 4, 00:15:30.464 "num_base_bdevs_discovered": 3, 00:15:30.464 "num_base_bdevs_operational": 4, 00:15:30.464 "base_bdevs_list": [ 00:15:30.464 { 00:15:30.464 "name": "BaseBdev1", 00:15:30.464 "uuid": "8b5149c3-5b11-485f-8721-1575f91e1c1f", 00:15:30.464 "is_configured": true, 00:15:30.464 "data_offset": 2048, 00:15:30.464 "data_size": 63488 00:15:30.464 
}, 00:15:30.464 { 00:15:30.464 "name": null, 00:15:30.464 "uuid": "2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:30.464 "is_configured": false, 00:15:30.464 "data_offset": 0, 00:15:30.464 "data_size": 63488 00:15:30.464 }, 00:15:30.464 { 00:15:30.464 "name": "BaseBdev3", 00:15:30.464 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:30.464 "is_configured": true, 00:15:30.464 "data_offset": 2048, 00:15:30.464 "data_size": 63488 00:15:30.464 }, 00:15:30.464 { 00:15:30.464 "name": "BaseBdev4", 00:15:30.464 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:30.464 "is_configured": true, 00:15:30.464 "data_offset": 2048, 00:15:30.464 "data_size": 63488 00:15:30.464 } 00:15:30.464 ] 00:15:30.464 }' 00:15:30.464 18:56:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.464 18:56:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.034 
[2024-11-16 18:56:14.261704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.034 "name": "Existed_Raid", 00:15:31.034 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:31.034 "strip_size_kb": 64, 00:15:31.034 "state": "configuring", 00:15:31.034 "raid_level": "raid5f", 00:15:31.034 "superblock": true, 00:15:31.034 "num_base_bdevs": 4, 00:15:31.034 "num_base_bdevs_discovered": 2, 00:15:31.034 "num_base_bdevs_operational": 4, 00:15:31.034 "base_bdevs_list": [ 00:15:31.034 { 00:15:31.034 "name": "BaseBdev1", 00:15:31.034 "uuid": "8b5149c3-5b11-485f-8721-1575f91e1c1f", 00:15:31.034 "is_configured": true, 00:15:31.034 "data_offset": 2048, 00:15:31.034 "data_size": 63488 00:15:31.034 }, 00:15:31.034 { 00:15:31.034 "name": null, 00:15:31.034 "uuid": "2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:31.034 "is_configured": false, 00:15:31.034 "data_offset": 0, 00:15:31.034 "data_size": 63488 00:15:31.034 }, 00:15:31.034 { 00:15:31.034 "name": null, 00:15:31.034 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:31.034 "is_configured": false, 00:15:31.034 "data_offset": 0, 00:15:31.034 "data_size": 63488 00:15:31.034 }, 00:15:31.034 { 00:15:31.034 "name": "BaseBdev4", 00:15:31.034 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:31.034 "is_configured": true, 00:15:31.034 "data_offset": 2048, 00:15:31.034 "data_size": 63488 00:15:31.034 } 00:15:31.034 ] 00:15:31.034 }' 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.034 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.293 [2024-11-16 18:56:14.720906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.293 18:56:14 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.293 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.294 "name": "Existed_Raid", 00:15:31.294 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:31.294 "strip_size_kb": 64, 00:15:31.294 "state": "configuring", 00:15:31.294 "raid_level": "raid5f", 00:15:31.294 "superblock": true, 00:15:31.294 "num_base_bdevs": 4, 00:15:31.294 "num_base_bdevs_discovered": 3, 00:15:31.294 "num_base_bdevs_operational": 4, 00:15:31.294 "base_bdevs_list": [ 00:15:31.294 { 00:15:31.294 "name": "BaseBdev1", 00:15:31.294 "uuid": "8b5149c3-5b11-485f-8721-1575f91e1c1f", 00:15:31.294 "is_configured": true, 00:15:31.294 "data_offset": 2048, 00:15:31.294 "data_size": 63488 00:15:31.294 }, 00:15:31.294 { 00:15:31.294 "name": null, 00:15:31.294 "uuid": "2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:31.294 "is_configured": false, 00:15:31.294 "data_offset": 0, 00:15:31.294 "data_size": 63488 00:15:31.294 }, 00:15:31.294 { 00:15:31.294 "name": "BaseBdev3", 00:15:31.294 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:31.294 "is_configured": true, 00:15:31.294 "data_offset": 2048, 00:15:31.294 "data_size": 63488 00:15:31.294 }, 00:15:31.294 { 
00:15:31.294 "name": "BaseBdev4", 00:15:31.294 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:31.294 "is_configured": true, 00:15:31.294 "data_offset": 2048, 00:15:31.294 "data_size": 63488 00:15:31.294 } 00:15:31.294 ] 00:15:31.294 }' 00:15:31.294 18:56:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.294 18:56:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.863 [2024-11-16 18:56:15.144193] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:31.863 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.864 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:31.864 "name": "Existed_Raid", 00:15:31.864 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:31.864 "strip_size_kb": 64, 00:15:31.864 "state": "configuring", 00:15:31.864 "raid_level": "raid5f", 00:15:31.864 "superblock": true, 00:15:31.864 "num_base_bdevs": 4, 00:15:31.864 "num_base_bdevs_discovered": 2, 00:15:31.864 
"num_base_bdevs_operational": 4, 00:15:31.864 "base_bdevs_list": [ 00:15:31.864 { 00:15:31.864 "name": null, 00:15:31.864 "uuid": "8b5149c3-5b11-485f-8721-1575f91e1c1f", 00:15:31.864 "is_configured": false, 00:15:31.864 "data_offset": 0, 00:15:31.864 "data_size": 63488 00:15:31.864 }, 00:15:31.864 { 00:15:31.864 "name": null, 00:15:31.864 "uuid": "2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:31.864 "is_configured": false, 00:15:31.864 "data_offset": 0, 00:15:31.864 "data_size": 63488 00:15:31.864 }, 00:15:31.864 { 00:15:31.864 "name": "BaseBdev3", 00:15:31.864 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:31.864 "is_configured": true, 00:15:31.864 "data_offset": 2048, 00:15:31.864 "data_size": 63488 00:15:31.864 }, 00:15:31.864 { 00:15:31.864 "name": "BaseBdev4", 00:15:31.864 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:31.864 "is_configured": true, 00:15:31.864 "data_offset": 2048, 00:15:31.864 "data_size": 63488 00:15:31.864 } 00:15:31.864 ] 00:15:31.864 }' 00:15:31.864 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:31.864 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.434 [2024-11-16 18:56:15.677664] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.434 "name": "Existed_Raid", 00:15:32.434 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:32.434 "strip_size_kb": 64, 00:15:32.434 "state": "configuring", 00:15:32.434 "raid_level": "raid5f", 00:15:32.434 "superblock": true, 00:15:32.434 "num_base_bdevs": 4, 00:15:32.434 "num_base_bdevs_discovered": 3, 00:15:32.434 "num_base_bdevs_operational": 4, 00:15:32.434 "base_bdevs_list": [ 00:15:32.434 { 00:15:32.434 "name": null, 00:15:32.434 "uuid": "8b5149c3-5b11-485f-8721-1575f91e1c1f", 00:15:32.434 "is_configured": false, 00:15:32.434 "data_offset": 0, 00:15:32.434 "data_size": 63488 00:15:32.434 }, 00:15:32.434 { 00:15:32.434 "name": "BaseBdev2", 00:15:32.434 "uuid": "2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:32.434 "is_configured": true, 00:15:32.434 "data_offset": 2048, 00:15:32.434 "data_size": 63488 00:15:32.434 }, 00:15:32.434 { 00:15:32.434 "name": "BaseBdev3", 00:15:32.434 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:32.434 "is_configured": true, 00:15:32.434 "data_offset": 2048, 00:15:32.434 "data_size": 63488 00:15:32.434 }, 00:15:32.434 { 00:15:32.434 "name": "BaseBdev4", 00:15:32.434 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:32.434 "is_configured": true, 00:15:32.434 "data_offset": 2048, 00:15:32.434 "data_size": 63488 00:15:32.434 } 00:15:32.434 ] 00:15:32.434 }' 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.434 18:56:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:32.694 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:32.694 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.694 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.694 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.694 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 8b5149c3-5b11-485f-8721-1575f91e1c1f 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.955 [2024-11-16 18:56:16.272439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:32.955 [2024-11-16 18:56:16.272678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:32.955 [2024-11-16 
18:56:16.272691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:32.955 [2024-11-16 18:56:16.272930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:32.955 NewBaseBdev 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.955 [2024-11-16 18:56:16.279629] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:32.955 [2024-11-16 18:56:16.279716] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:32.955 [2024-11-16 18:56:16.279913] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.955 [ 00:15:32.955 { 00:15:32.955 "name": "NewBaseBdev", 00:15:32.955 "aliases": [ 00:15:32.955 "8b5149c3-5b11-485f-8721-1575f91e1c1f" 00:15:32.955 ], 00:15:32.955 "product_name": "Malloc disk", 00:15:32.955 "block_size": 512, 00:15:32.955 "num_blocks": 65536, 00:15:32.955 "uuid": "8b5149c3-5b11-485f-8721-1575f91e1c1f", 00:15:32.955 "assigned_rate_limits": { 00:15:32.955 "rw_ios_per_sec": 0, 00:15:32.955 "rw_mbytes_per_sec": 0, 00:15:32.955 "r_mbytes_per_sec": 0, 00:15:32.955 "w_mbytes_per_sec": 0 00:15:32.955 }, 00:15:32.955 "claimed": true, 00:15:32.955 "claim_type": "exclusive_write", 00:15:32.955 "zoned": false, 00:15:32.955 "supported_io_types": { 00:15:32.955 "read": true, 00:15:32.955 "write": true, 00:15:32.955 "unmap": true, 00:15:32.955 "flush": true, 00:15:32.955 "reset": true, 00:15:32.955 "nvme_admin": false, 00:15:32.955 "nvme_io": false, 00:15:32.955 "nvme_io_md": false, 00:15:32.955 "write_zeroes": true, 00:15:32.955 "zcopy": true, 00:15:32.955 "get_zone_info": false, 00:15:32.955 "zone_management": false, 00:15:32.955 "zone_append": false, 00:15:32.955 "compare": false, 00:15:32.955 "compare_and_write": false, 00:15:32.955 "abort": true, 00:15:32.955 "seek_hole": false, 00:15:32.955 "seek_data": false, 00:15:32.955 "copy": true, 00:15:32.955 "nvme_iov_md": false 00:15:32.955 }, 00:15:32.955 "memory_domains": [ 00:15:32.955 { 00:15:32.955 "dma_device_id": "system", 00:15:32.955 "dma_device_type": 1 00:15:32.955 }, 00:15:32.955 { 00:15:32.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.955 "dma_device_type": 2 00:15:32.955 } 00:15:32.955 ], 00:15:32.955 "driver_specific": {} 00:15:32.955 } 00:15:32.955 ] 00:15:32.955 18:56:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.955 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.956 "name": "Existed_Raid", 00:15:32.956 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:32.956 "strip_size_kb": 64, 00:15:32.956 "state": "online", 00:15:32.956 "raid_level": "raid5f", 00:15:32.956 "superblock": true, 00:15:32.956 "num_base_bdevs": 4, 00:15:32.956 "num_base_bdevs_discovered": 4, 00:15:32.956 "num_base_bdevs_operational": 4, 00:15:32.956 "base_bdevs_list": [ 00:15:32.956 { 00:15:32.956 "name": "NewBaseBdev", 00:15:32.956 "uuid": "8b5149c3-5b11-485f-8721-1575f91e1c1f", 00:15:32.956 "is_configured": true, 00:15:32.956 "data_offset": 2048, 00:15:32.956 "data_size": 63488 00:15:32.956 }, 00:15:32.956 { 00:15:32.956 "name": "BaseBdev2", 00:15:32.956 "uuid": "2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:32.956 "is_configured": true, 00:15:32.956 "data_offset": 2048, 00:15:32.956 "data_size": 63488 00:15:32.956 }, 00:15:32.956 { 00:15:32.956 "name": "BaseBdev3", 00:15:32.956 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:32.956 "is_configured": true, 00:15:32.956 "data_offset": 2048, 00:15:32.956 "data_size": 63488 00:15:32.956 }, 00:15:32.956 { 00:15:32.956 "name": "BaseBdev4", 00:15:32.956 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:32.956 "is_configured": true, 00:15:32.956 "data_offset": 2048, 00:15:32.956 "data_size": 63488 00:15:32.956 } 00:15:32.956 ] 00:15:32.956 }' 00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.956 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.216 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:33.476 [2024-11-16 18:56:16.691401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:33.476 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.476 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:33.476 "name": "Existed_Raid", 00:15:33.476 "aliases": [ 00:15:33.476 "35e8420d-7c1d-4650-8200-105e6802b1b4" 00:15:33.476 ], 00:15:33.476 "product_name": "Raid Volume", 00:15:33.476 "block_size": 512, 00:15:33.476 "num_blocks": 190464, 00:15:33.476 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:33.476 "assigned_rate_limits": { 00:15:33.476 "rw_ios_per_sec": 0, 00:15:33.476 "rw_mbytes_per_sec": 0, 00:15:33.476 "r_mbytes_per_sec": 0, 00:15:33.476 "w_mbytes_per_sec": 0 00:15:33.476 }, 00:15:33.476 "claimed": false, 00:15:33.476 "zoned": false, 00:15:33.476 "supported_io_types": { 00:15:33.476 "read": true, 00:15:33.476 "write": true, 00:15:33.476 "unmap": false, 00:15:33.476 "flush": false, 00:15:33.476 "reset": true, 00:15:33.476 "nvme_admin": false, 00:15:33.476 "nvme_io": false, 
00:15:33.476 "nvme_io_md": false, 00:15:33.476 "write_zeroes": true, 00:15:33.476 "zcopy": false, 00:15:33.476 "get_zone_info": false, 00:15:33.476 "zone_management": false, 00:15:33.476 "zone_append": false, 00:15:33.476 "compare": false, 00:15:33.476 "compare_and_write": false, 00:15:33.476 "abort": false, 00:15:33.476 "seek_hole": false, 00:15:33.476 "seek_data": false, 00:15:33.476 "copy": false, 00:15:33.476 "nvme_iov_md": false 00:15:33.476 }, 00:15:33.476 "driver_specific": { 00:15:33.476 "raid": { 00:15:33.476 "uuid": "35e8420d-7c1d-4650-8200-105e6802b1b4", 00:15:33.476 "strip_size_kb": 64, 00:15:33.476 "state": "online", 00:15:33.476 "raid_level": "raid5f", 00:15:33.476 "superblock": true, 00:15:33.476 "num_base_bdevs": 4, 00:15:33.476 "num_base_bdevs_discovered": 4, 00:15:33.476 "num_base_bdevs_operational": 4, 00:15:33.476 "base_bdevs_list": [ 00:15:33.476 { 00:15:33.476 "name": "NewBaseBdev", 00:15:33.476 "uuid": "8b5149c3-5b11-485f-8721-1575f91e1c1f", 00:15:33.476 "is_configured": true, 00:15:33.476 "data_offset": 2048, 00:15:33.476 "data_size": 63488 00:15:33.476 }, 00:15:33.476 { 00:15:33.476 "name": "BaseBdev2", 00:15:33.476 "uuid": "2a9b0cc2-c713-4569-b334-20679e60329c", 00:15:33.476 "is_configured": true, 00:15:33.476 "data_offset": 2048, 00:15:33.476 "data_size": 63488 00:15:33.476 }, 00:15:33.476 { 00:15:33.476 "name": "BaseBdev3", 00:15:33.476 "uuid": "49b9c38a-26a0-4abd-8e4c-2746362788c1", 00:15:33.476 "is_configured": true, 00:15:33.477 "data_offset": 2048, 00:15:33.477 "data_size": 63488 00:15:33.477 }, 00:15:33.477 { 00:15:33.477 "name": "BaseBdev4", 00:15:33.477 "uuid": "5f430d0f-4702-4b33-8718-4a70b35f7f36", 00:15:33.477 "is_configured": true, 00:15:33.477 "data_offset": 2048, 00:15:33.477 "data_size": 63488 00:15:33.477 } 00:15:33.477 ] 00:15:33.477 } 00:15:33.477 } 00:15:33.477 }' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:33.477 BaseBdev2 00:15:33.477 BaseBdev3 00:15:33.477 BaseBdev4' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.477 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:33.737 [2024-11-16 18:56:16.958715] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:33.737 [2024-11-16 18:56:16.958782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:33.737 [2024-11-16 18:56:16.958863] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:33.737 [2024-11-16 18:56:16.959159] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:33.737 [2024-11-16 18:56:16.959211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83110 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83110 ']' 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83110 00:15:33.737 18:56:16 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83110 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83110' 00:15:33.737 killing process with pid 83110 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83110 00:15:33.737 [2024-11-16 18:56:16.993539] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:33.737 18:56:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83110 00:15:33.997 [2024-11-16 18:56:17.359510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:34.936 18:56:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:34.936 00:15:34.936 real 0m10.648s 00:15:34.936 user 0m16.879s 00:15:34.936 sys 0m1.865s 00:15:34.936 18:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:34.936 18:56:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:34.936 ************************************ 00:15:34.936 END TEST raid5f_state_function_test_sb 00:15:34.936 ************************************ 00:15:35.196 18:56:18 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:15:35.196 18:56:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:35.196 
18:56:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:35.196 18:56:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:35.196 ************************************ 00:15:35.196 START TEST raid5f_superblock_test 00:15:35.196 ************************************ 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83770 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83770 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83770 ']' 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:35.196 18:56:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.196 [2024-11-16 18:56:18.557708] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:35.196 [2024-11-16 18:56:18.557901] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83770 ] 00:15:35.456 [2024-11-16 18:56:18.729509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.456 [2024-11-16 18:56:18.837356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:35.716 [2024-11-16 18:56:19.020463] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.716 [2024-11-16 18:56:19.020593] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.974 malloc1 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.974 [2024-11-16 18:56:19.410143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:35.974 [2024-11-16 18:56:19.410258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.974 [2024-11-16 18:56:19.410299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:35.974 [2024-11-16 18:56:19.410327] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.974 [2024-11-16 18:56:19.412356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.974 [2024-11-16 18:56:19.412444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:35.974 pt1 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.974 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 malloc2 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 [2024-11-16 18:56:19.467103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:36.235 [2024-11-16 18:56:19.467203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.235 [2024-11-16 18:56:19.467227] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:36.235 [2024-11-16 18:56:19.467235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.235 [2024-11-16 18:56:19.469250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.235 [2024-11-16 18:56:19.469285] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:36.235 pt2 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 malloc3 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 [2024-11-16 18:56:19.555820] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:36.235 [2024-11-16 18:56:19.555904] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.235 [2024-11-16 18:56:19.555948] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:36.235 [2024-11-16 18:56:19.555985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.235 [2024-11-16 18:56:19.557997] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.235 [2024-11-16 18:56:19.558057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:36.235 pt3 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.235 18:56:19 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 malloc4 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.235 [2024-11-16 18:56:19.613735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:36.235 [2024-11-16 18:56:19.613817] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:36.235 [2024-11-16 18:56:19.613850] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:36.235 [2024-11-16 18:56:19.613876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:36.235 [2024-11-16 18:56:19.615849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:36.235 [2024-11-16 18:56:19.615908] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:36.235 pt4 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.235 18:56:19 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.235 [2024-11-16 18:56:19.625735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:36.235 [2024-11-16 18:56:19.627430] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:36.236 [2024-11-16 18:56:19.627490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:36.236 [2024-11-16 18:56:19.627547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:36.236 [2024-11-16 18:56:19.627746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:36.236 [2024-11-16 18:56:19.627761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:36.236 [2024-11-16 18:56:19.628006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:36.236 [2024-11-16 18:56:19.634965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:36.236 [2024-11-16 18:56:19.634986] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:36.236 [2024-11-16 18:56:19.635147] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:36.236 
18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.236 "name": "raid_bdev1", 00:15:36.236 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:36.236 "strip_size_kb": 64, 00:15:36.236 "state": "online", 00:15:36.236 "raid_level": "raid5f", 00:15:36.236 "superblock": true, 00:15:36.236 "num_base_bdevs": 4, 00:15:36.236 "num_base_bdevs_discovered": 4, 00:15:36.236 "num_base_bdevs_operational": 4, 00:15:36.236 "base_bdevs_list": [ 00:15:36.236 { 00:15:36.236 "name": "pt1", 00:15:36.236 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.236 "is_configured": true, 00:15:36.236 "data_offset": 2048, 00:15:36.236 "data_size": 63488 00:15:36.236 }, 00:15:36.236 { 00:15:36.236 "name": "pt2", 00:15:36.236 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.236 "is_configured": true, 00:15:36.236 "data_offset": 2048, 00:15:36.236 
"data_size": 63488 00:15:36.236 }, 00:15:36.236 { 00:15:36.236 "name": "pt3", 00:15:36.236 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.236 "is_configured": true, 00:15:36.236 "data_offset": 2048, 00:15:36.236 "data_size": 63488 00:15:36.236 }, 00:15:36.236 { 00:15:36.236 "name": "pt4", 00:15:36.236 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:36.236 "is_configured": true, 00:15:36.236 "data_offset": 2048, 00:15:36.236 "data_size": 63488 00:15:36.236 } 00:15:36.236 ] 00:15:36.236 }' 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.236 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.807 18:56:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.807 [2024-11-16 18:56:19.986900] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:36.807 "name": "raid_bdev1", 00:15:36.807 "aliases": [ 00:15:36.807 "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e" 00:15:36.807 ], 00:15:36.807 "product_name": "Raid Volume", 00:15:36.807 "block_size": 512, 00:15:36.807 "num_blocks": 190464, 00:15:36.807 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:36.807 "assigned_rate_limits": { 00:15:36.807 "rw_ios_per_sec": 0, 00:15:36.807 "rw_mbytes_per_sec": 0, 00:15:36.807 "r_mbytes_per_sec": 0, 00:15:36.807 "w_mbytes_per_sec": 0 00:15:36.807 }, 00:15:36.807 "claimed": false, 00:15:36.807 "zoned": false, 00:15:36.807 "supported_io_types": { 00:15:36.807 "read": true, 00:15:36.807 "write": true, 00:15:36.807 "unmap": false, 00:15:36.807 "flush": false, 00:15:36.807 "reset": true, 00:15:36.807 "nvme_admin": false, 00:15:36.807 "nvme_io": false, 00:15:36.807 "nvme_io_md": false, 00:15:36.807 "write_zeroes": true, 00:15:36.807 "zcopy": false, 00:15:36.807 "get_zone_info": false, 00:15:36.807 "zone_management": false, 00:15:36.807 "zone_append": false, 00:15:36.807 "compare": false, 00:15:36.807 "compare_and_write": false, 00:15:36.807 "abort": false, 00:15:36.807 "seek_hole": false, 00:15:36.807 "seek_data": false, 00:15:36.807 "copy": false, 00:15:36.807 "nvme_iov_md": false 00:15:36.807 }, 00:15:36.807 "driver_specific": { 00:15:36.807 "raid": { 00:15:36.807 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:36.807 "strip_size_kb": 64, 00:15:36.807 "state": "online", 00:15:36.807 "raid_level": "raid5f", 00:15:36.807 "superblock": true, 00:15:36.807 "num_base_bdevs": 4, 00:15:36.807 "num_base_bdevs_discovered": 4, 00:15:36.807 "num_base_bdevs_operational": 4, 00:15:36.807 "base_bdevs_list": [ 00:15:36.807 { 00:15:36.807 "name": "pt1", 00:15:36.807 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:36.807 "is_configured": true, 00:15:36.807 "data_offset": 2048, 
00:15:36.807 "data_size": 63488 00:15:36.807 }, 00:15:36.807 { 00:15:36.807 "name": "pt2", 00:15:36.807 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:36.807 "is_configured": true, 00:15:36.807 "data_offset": 2048, 00:15:36.807 "data_size": 63488 00:15:36.807 }, 00:15:36.807 { 00:15:36.807 "name": "pt3", 00:15:36.807 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:36.807 "is_configured": true, 00:15:36.807 "data_offset": 2048, 00:15:36.807 "data_size": 63488 00:15:36.807 }, 00:15:36.807 { 00:15:36.807 "name": "pt4", 00:15:36.807 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:36.807 "is_configured": true, 00:15:36.807 "data_offset": 2048, 00:15:36.807 "data_size": 63488 00:15:36.807 } 00:15:36.807 ] 00:15:36.807 } 00:15:36.807 } 00:15:36.807 }' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:36.807 pt2 00:15:36.807 pt3 00:15:36.807 pt4' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.807 18:56:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:36.807 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:36.808 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.808 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:36.808 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.069 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:37.070 [2024-11-16 18:56:20.286323] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9768d46a-162a-4fcc-bec4-fc4e76dd7b7e 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
9768d46a-162a-4fcc-bec4-fc4e76dd7b7e ']' 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 [2024-11-16 18:56:20.334083] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.070 [2024-11-16 18:56:20.334105] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:37.070 [2024-11-16 18:56:20.334172] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:37.070 [2024-11-16 18:56:20.334250] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:37.070 [2024-11-16 18:56:20.334263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.070 
18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 18:56:20 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 [2024-11-16 18:56:20.501806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:37.070 [2024-11-16 18:56:20.503649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:37.070 [2024-11-16 18:56:20.503725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:37.070 [2024-11-16 18:56:20.503757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:37.070 [2024-11-16 18:56:20.503804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:37.070 [2024-11-16 18:56:20.503843] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:37.070 [2024-11-16 18:56:20.503861] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:37.070 [2024-11-16 18:56:20.503879] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:37.070 [2024-11-16 18:56:20.503891] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:37.070 [2024-11-16 18:56:20.503900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:37.070 request: 00:15:37.070 { 00:15:37.070 "name": "raid_bdev1", 00:15:37.070 "raid_level": "raid5f", 00:15:37.070 "base_bdevs": [ 00:15:37.070 "malloc1", 00:15:37.070 "malloc2", 00:15:37.070 "malloc3", 00:15:37.070 "malloc4" 00:15:37.070 ], 00:15:37.070 "strip_size_kb": 64, 00:15:37.070 "superblock": false, 00:15:37.070 "method": "bdev_raid_create", 00:15:37.070 "req_id": 1 00:15:37.070 } 00:15:37.070 Got JSON-RPC error response 
00:15:37.070 response: 00:15:37.070 { 00:15:37.070 "code": -17, 00:15:37.070 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:37.070 } 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:37.070 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.331 [2024-11-16 18:56:20.565682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:37.331 [2024-11-16 18:56:20.565764] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:15:37.331 [2024-11-16 18:56:20.565795] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:37.331 [2024-11-16 18:56:20.565823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.331 [2024-11-16 18:56:20.567961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.331 [2024-11-16 18:56:20.568055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:37.331 [2024-11-16 18:56:20.568142] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:37.331 [2024-11-16 18:56:20.568219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:37.331 pt1 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.331 "name": "raid_bdev1", 00:15:37.331 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:37.331 "strip_size_kb": 64, 00:15:37.331 "state": "configuring", 00:15:37.331 "raid_level": "raid5f", 00:15:37.331 "superblock": true, 00:15:37.331 "num_base_bdevs": 4, 00:15:37.331 "num_base_bdevs_discovered": 1, 00:15:37.331 "num_base_bdevs_operational": 4, 00:15:37.331 "base_bdevs_list": [ 00:15:37.331 { 00:15:37.331 "name": "pt1", 00:15:37.331 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.331 "is_configured": true, 00:15:37.331 "data_offset": 2048, 00:15:37.331 "data_size": 63488 00:15:37.331 }, 00:15:37.331 { 00:15:37.331 "name": null, 00:15:37.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.331 "is_configured": false, 00:15:37.331 "data_offset": 2048, 00:15:37.331 "data_size": 63488 00:15:37.331 }, 00:15:37.331 { 00:15:37.331 "name": null, 00:15:37.331 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.331 "is_configured": false, 00:15:37.331 "data_offset": 2048, 00:15:37.331 "data_size": 63488 00:15:37.331 }, 00:15:37.331 { 00:15:37.331 "name": null, 00:15:37.331 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:37.331 "is_configured": false, 00:15:37.331 "data_offset": 2048, 00:15:37.331 "data_size": 63488 00:15:37.331 } 00:15:37.331 ] 00:15:37.331 }' 
00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.331 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.591 [2024-11-16 18:56:20.953020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:37.591 [2024-11-16 18:56:20.953082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:37.591 [2024-11-16 18:56:20.953101] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:37.591 [2024-11-16 18:56:20.953112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:37.591 [2024-11-16 18:56:20.953505] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:37.591 [2024-11-16 18:56:20.953530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:37.591 [2024-11-16 18:56:20.953604] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:37.591 [2024-11-16 18:56:20.953624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:37.591 pt2 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.591 [2024-11-16 18:56:20.965015] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:37.591 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.592 18:56:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:37.592 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.592 "name": "raid_bdev1", 00:15:37.592 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:37.592 "strip_size_kb": 64, 00:15:37.592 "state": "configuring", 00:15:37.592 "raid_level": "raid5f", 00:15:37.592 "superblock": true, 00:15:37.592 "num_base_bdevs": 4, 00:15:37.592 "num_base_bdevs_discovered": 1, 00:15:37.592 "num_base_bdevs_operational": 4, 00:15:37.592 "base_bdevs_list": [ 00:15:37.592 { 00:15:37.592 "name": "pt1", 00:15:37.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:37.592 "is_configured": true, 00:15:37.592 "data_offset": 2048, 00:15:37.592 "data_size": 63488 00:15:37.592 }, 00:15:37.592 { 00:15:37.592 "name": null, 00:15:37.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:37.592 "is_configured": false, 00:15:37.592 "data_offset": 0, 00:15:37.592 "data_size": 63488 00:15:37.592 }, 00:15:37.592 { 00:15:37.592 "name": null, 00:15:37.592 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:37.592 "is_configured": false, 00:15:37.592 "data_offset": 2048, 00:15:37.592 "data_size": 63488 00:15:37.592 }, 00:15:37.592 { 00:15:37.592 "name": null, 00:15:37.592 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:37.592 "is_configured": false, 00:15:37.592 "data_offset": 2048, 00:15:37.592 "data_size": 63488 00:15:37.592 } 00:15:37.592 ] 00:15:37.592 }' 00:15:37.592 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.592 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.162 [2024-11-16 18:56:21.364310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:38.162 [2024-11-16 18:56:21.364406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.162 [2024-11-16 18:56:21.364440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:38.162 [2024-11-16 18:56:21.364467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.162 [2024-11-16 18:56:21.364905] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.162 [2024-11-16 18:56:21.364958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:38.162 [2024-11-16 18:56:21.365064] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:38.162 [2024-11-16 18:56:21.365110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:38.162 pt2 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.162 [2024-11-16 18:56:21.376276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:15:38.162 [2024-11-16 18:56:21.376357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.162 [2024-11-16 18:56:21.376387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:38.162 [2024-11-16 18:56:21.376429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.162 [2024-11-16 18:56:21.376799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.162 [2024-11-16 18:56:21.376849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:38.162 [2024-11-16 18:56:21.376932] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:38.162 [2024-11-16 18:56:21.376974] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:38.162 pt3 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.162 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.162 [2024-11-16 18:56:21.388234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:38.162 [2024-11-16 18:56:21.388280] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:38.162 [2024-11-16 18:56:21.388296] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:15:38.162 [2024-11-16 18:56:21.388304] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:38.162 [2024-11-16 18:56:21.388665] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:38.163 [2024-11-16 18:56:21.388681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:38.163 [2024-11-16 18:56:21.388734] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:38.163 [2024-11-16 18:56:21.388751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:38.163 [2024-11-16 18:56:21.388899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:38.163 [2024-11-16 18:56:21.388906] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:38.163 [2024-11-16 18:56:21.389121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:38.163 [2024-11-16 18:56:21.395918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:38.163 [2024-11-16 18:56:21.395938] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:38.163 [2024-11-16 18:56:21.396113] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.163 pt4 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.163 "name": "raid_bdev1", 00:15:38.163 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:38.163 "strip_size_kb": 64, 00:15:38.163 "state": "online", 00:15:38.163 "raid_level": "raid5f", 00:15:38.163 "superblock": true, 00:15:38.163 "num_base_bdevs": 4, 00:15:38.163 "num_base_bdevs_discovered": 4, 00:15:38.163 "num_base_bdevs_operational": 4, 00:15:38.163 "base_bdevs_list": [ 00:15:38.163 { 00:15:38.163 "name": "pt1", 00:15:38.163 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.163 "is_configured": true, 00:15:38.163 
"data_offset": 2048, 00:15:38.163 "data_size": 63488 00:15:38.163 }, 00:15:38.163 { 00:15:38.163 "name": "pt2", 00:15:38.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.163 "is_configured": true, 00:15:38.163 "data_offset": 2048, 00:15:38.163 "data_size": 63488 00:15:38.163 }, 00:15:38.163 { 00:15:38.163 "name": "pt3", 00:15:38.163 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.163 "is_configured": true, 00:15:38.163 "data_offset": 2048, 00:15:38.163 "data_size": 63488 00:15:38.163 }, 00:15:38.163 { 00:15:38.163 "name": "pt4", 00:15:38.163 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.163 "is_configured": true, 00:15:38.163 "data_offset": 2048, 00:15:38.163 "data_size": 63488 00:15:38.163 } 00:15:38.163 ] 00:15:38.163 }' 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.163 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.423 18:56:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.423 [2024-11-16 18:56:21.819798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:38.423 "name": "raid_bdev1", 00:15:38.423 "aliases": [ 00:15:38.423 "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e" 00:15:38.423 ], 00:15:38.423 "product_name": "Raid Volume", 00:15:38.423 "block_size": 512, 00:15:38.423 "num_blocks": 190464, 00:15:38.423 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:38.423 "assigned_rate_limits": { 00:15:38.423 "rw_ios_per_sec": 0, 00:15:38.423 "rw_mbytes_per_sec": 0, 00:15:38.423 "r_mbytes_per_sec": 0, 00:15:38.423 "w_mbytes_per_sec": 0 00:15:38.423 }, 00:15:38.423 "claimed": false, 00:15:38.423 "zoned": false, 00:15:38.423 "supported_io_types": { 00:15:38.423 "read": true, 00:15:38.423 "write": true, 00:15:38.423 "unmap": false, 00:15:38.423 "flush": false, 00:15:38.423 "reset": true, 00:15:38.423 "nvme_admin": false, 00:15:38.423 "nvme_io": false, 00:15:38.423 "nvme_io_md": false, 00:15:38.423 "write_zeroes": true, 00:15:38.423 "zcopy": false, 00:15:38.423 "get_zone_info": false, 00:15:38.423 "zone_management": false, 00:15:38.423 "zone_append": false, 00:15:38.423 "compare": false, 00:15:38.423 "compare_and_write": false, 00:15:38.423 "abort": false, 00:15:38.423 "seek_hole": false, 00:15:38.423 "seek_data": false, 00:15:38.423 "copy": false, 00:15:38.423 "nvme_iov_md": false 00:15:38.423 }, 00:15:38.423 "driver_specific": { 00:15:38.423 "raid": { 00:15:38.423 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:38.423 "strip_size_kb": 64, 00:15:38.423 "state": "online", 00:15:38.423 "raid_level": "raid5f", 00:15:38.423 "superblock": true, 00:15:38.423 "num_base_bdevs": 4, 00:15:38.423 "num_base_bdevs_discovered": 4, 
00:15:38.423 "num_base_bdevs_operational": 4, 00:15:38.423 "base_bdevs_list": [ 00:15:38.423 { 00:15:38.423 "name": "pt1", 00:15:38.423 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:38.423 "is_configured": true, 00:15:38.423 "data_offset": 2048, 00:15:38.423 "data_size": 63488 00:15:38.423 }, 00:15:38.423 { 00:15:38.423 "name": "pt2", 00:15:38.423 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.423 "is_configured": true, 00:15:38.423 "data_offset": 2048, 00:15:38.423 "data_size": 63488 00:15:38.423 }, 00:15:38.423 { 00:15:38.423 "name": "pt3", 00:15:38.423 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.423 "is_configured": true, 00:15:38.423 "data_offset": 2048, 00:15:38.423 "data_size": 63488 00:15:38.423 }, 00:15:38.423 { 00:15:38.423 "name": "pt4", 00:15:38.423 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.423 "is_configured": true, 00:15:38.423 "data_offset": 2048, 00:15:38.423 "data_size": 63488 00:15:38.423 } 00:15:38.423 ] 00:15:38.423 } 00:15:38.423 } 00:15:38.423 }' 00:15:38.423 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:38.683 pt2 00:15:38.683 pt3 00:15:38.683 pt4' 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.683 18:56:21 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.683 18:56:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.683 
18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.683 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.683 [2024-11-16 18:56:22.139208] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9768d46a-162a-4fcc-bec4-fc4e76dd7b7e '!=' 9768d46a-162a-4fcc-bec4-fc4e76dd7b7e ']' 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.943 [2024-11-16 18:56:22.171030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:15:38.943 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.944 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.944 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.944 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.944 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.944 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.944 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.944 "name": "raid_bdev1", 00:15:38.944 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:38.944 "strip_size_kb": 64, 00:15:38.944 "state": "online", 00:15:38.944 "raid_level": "raid5f", 00:15:38.944 "superblock": true, 00:15:38.944 "num_base_bdevs": 4, 00:15:38.944 "num_base_bdevs_discovered": 3, 00:15:38.944 "num_base_bdevs_operational": 3, 00:15:38.944 "base_bdevs_list": [ 00:15:38.944 { 00:15:38.944 "name": null, 00:15:38.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.944 "is_configured": false, 00:15:38.944 "data_offset": 0, 00:15:38.944 "data_size": 63488 00:15:38.944 }, 00:15:38.944 { 00:15:38.944 "name": "pt2", 00:15:38.944 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:38.944 "is_configured": true, 00:15:38.944 "data_offset": 2048, 00:15:38.944 "data_size": 63488 00:15:38.944 }, 00:15:38.944 { 00:15:38.944 "name": "pt3", 00:15:38.944 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:38.944 "is_configured": true, 00:15:38.944 "data_offset": 2048, 00:15:38.944 "data_size": 63488 00:15:38.944 }, 00:15:38.944 { 00:15:38.944 "name": "pt4", 00:15:38.944 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:38.944 "is_configured": true, 00:15:38.944 
"data_offset": 2048, 00:15:38.944 "data_size": 63488 00:15:38.944 } 00:15:38.944 ] 00:15:38.944 }' 00:15:38.944 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.944 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.203 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:39.203 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.203 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.203 [2024-11-16 18:56:22.610233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:39.203 [2024-11-16 18:56:22.610298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:39.203 [2024-11-16 18:56:22.610399] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:39.203 [2024-11-16 18:56:22.610486] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:39.203 [2024-11-16 18:56:22.610518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:39.203 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.203 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.204 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:39.463 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.464 [2024-11-16 18:56:22.706062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:39.464 [2024-11-16 18:56:22.706108] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.464 [2024-11-16 18:56:22.706139] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:39.464 [2024-11-16 18:56:22.706147] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.464 [2024-11-16 18:56:22.708258] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.464 [2024-11-16 18:56:22.708295] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:39.464 [2024-11-16 18:56:22.708373] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:39.464 [2024-11-16 18:56:22.708413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:39.464 pt2 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.464 "name": "raid_bdev1", 00:15:39.464 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:39.464 "strip_size_kb": 64, 00:15:39.464 "state": "configuring", 00:15:39.464 "raid_level": "raid5f", 00:15:39.464 "superblock": true, 00:15:39.464 
"num_base_bdevs": 4, 00:15:39.464 "num_base_bdevs_discovered": 1, 00:15:39.464 "num_base_bdevs_operational": 3, 00:15:39.464 "base_bdevs_list": [ 00:15:39.464 { 00:15:39.464 "name": null, 00:15:39.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.464 "is_configured": false, 00:15:39.464 "data_offset": 2048, 00:15:39.464 "data_size": 63488 00:15:39.464 }, 00:15:39.464 { 00:15:39.464 "name": "pt2", 00:15:39.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.464 "is_configured": true, 00:15:39.464 "data_offset": 2048, 00:15:39.464 "data_size": 63488 00:15:39.464 }, 00:15:39.464 { 00:15:39.464 "name": null, 00:15:39.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.464 "is_configured": false, 00:15:39.464 "data_offset": 2048, 00:15:39.464 "data_size": 63488 00:15:39.464 }, 00:15:39.464 { 00:15:39.464 "name": null, 00:15:39.464 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:39.464 "is_configured": false, 00:15:39.464 "data_offset": 2048, 00:15:39.464 "data_size": 63488 00:15:39.464 } 00:15:39.464 ] 00:15:39.464 }' 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.464 18:56:22 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.724 [2024-11-16 18:56:23.129400] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:39.724 [2024-11-16 
18:56:23.129486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.724 [2024-11-16 18:56:23.129522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:15:39.724 [2024-11-16 18:56:23.129548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.724 [2024-11-16 18:56:23.129946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.724 [2024-11-16 18:56:23.129999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:39.724 [2024-11-16 18:56:23.130092] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:39.724 [2024-11-16 18:56:23.130143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:39.724 pt3 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.724 "name": "raid_bdev1", 00:15:39.724 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:39.724 "strip_size_kb": 64, 00:15:39.724 "state": "configuring", 00:15:39.724 "raid_level": "raid5f", 00:15:39.724 "superblock": true, 00:15:39.724 "num_base_bdevs": 4, 00:15:39.724 "num_base_bdevs_discovered": 2, 00:15:39.724 "num_base_bdevs_operational": 3, 00:15:39.724 "base_bdevs_list": [ 00:15:39.724 { 00:15:39.724 "name": null, 00:15:39.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.724 "is_configured": false, 00:15:39.724 "data_offset": 2048, 00:15:39.724 "data_size": 63488 00:15:39.724 }, 00:15:39.724 { 00:15:39.724 "name": "pt2", 00:15:39.724 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:39.724 "is_configured": true, 00:15:39.724 "data_offset": 2048, 00:15:39.724 "data_size": 63488 00:15:39.724 }, 00:15:39.724 { 00:15:39.724 "name": "pt3", 00:15:39.724 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:39.724 "is_configured": true, 00:15:39.724 "data_offset": 2048, 00:15:39.724 "data_size": 63488 00:15:39.724 }, 00:15:39.724 { 00:15:39.724 "name": null, 00:15:39.724 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:39.724 "is_configured": false, 00:15:39.724 "data_offset": 2048, 
00:15:39.724 "data_size": 63488 00:15:39.724 } 00:15:39.724 ] 00:15:39.724 }' 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.724 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.294 [2024-11-16 18:56:23.580646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:40.294 [2024-11-16 18:56:23.580716] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.294 [2024-11-16 18:56:23.580739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:40.294 [2024-11-16 18:56:23.580748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.294 [2024-11-16 18:56:23.581163] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.294 [2024-11-16 18:56:23.581179] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:40.294 [2024-11-16 18:56:23.581257] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:40.294 [2024-11-16 18:56:23.581278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:40.294 [2024-11-16 18:56:23.581405] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:40.294 [2024-11-16 18:56:23.581413] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:40.294 [2024-11-16 18:56:23.581640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:40.294 pt4 00:15:40.294 [2024-11-16 18:56:23.588335] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:40.294 [2024-11-16 18:56:23.588359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:15:40.294 [2024-11-16 18:56:23.588634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.294 
18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.294 "name": "raid_bdev1", 00:15:40.294 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:40.294 "strip_size_kb": 64, 00:15:40.294 "state": "online", 00:15:40.294 "raid_level": "raid5f", 00:15:40.294 "superblock": true, 00:15:40.294 "num_base_bdevs": 4, 00:15:40.294 "num_base_bdevs_discovered": 3, 00:15:40.294 "num_base_bdevs_operational": 3, 00:15:40.294 "base_bdevs_list": [ 00:15:40.294 { 00:15:40.294 "name": null, 00:15:40.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.294 "is_configured": false, 00:15:40.294 "data_offset": 2048, 00:15:40.294 "data_size": 63488 00:15:40.294 }, 00:15:40.294 { 00:15:40.294 "name": "pt2", 00:15:40.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.294 "is_configured": true, 00:15:40.294 "data_offset": 2048, 00:15:40.294 "data_size": 63488 00:15:40.294 }, 00:15:40.294 { 00:15:40.294 "name": "pt3", 00:15:40.294 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.294 "is_configured": true, 00:15:40.294 "data_offset": 2048, 00:15:40.294 "data_size": 63488 00:15:40.294 }, 00:15:40.294 { 00:15:40.294 "name": "pt4", 00:15:40.294 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.294 "is_configured": true, 00:15:40.294 "data_offset": 2048, 00:15:40.294 "data_size": 63488 00:15:40.294 } 00:15:40.294 ] 00:15:40.294 }' 00:15:40.294 18:56:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.294 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.554 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:40.554 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.554 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.554 [2024-11-16 18:56:23.940270] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.554 [2024-11-16 18:56:23.940334] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:40.554 [2024-11-16 18:56:23.940414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:40.554 [2024-11-16 18:56:23.940512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:40.555 [2024-11-16 18:56:23.940564] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.555 [2024-11-16 18:56:23.996178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:40.555 [2024-11-16 18:56:23.996267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:40.555 [2024-11-16 18:56:23.996306] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:15:40.555 [2024-11-16 18:56:23.996335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:40.555 [2024-11-16 18:56:23.998486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:40.555 [2024-11-16 18:56:23.998555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:40.555 [2024-11-16 18:56:23.998672] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:40.555 [2024-11-16 18:56:23.998751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:40.555 
[2024-11-16 18:56:23.998940] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:40.555 [2024-11-16 18:56:23.998955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:40.555 [2024-11-16 18:56:23.998969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:40.555 [2024-11-16 18:56:23.999029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:40.555 [2024-11-16 18:56:23.999131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:40.555 pt1 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.555 18:56:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.555 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.815 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.815 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.815 "name": "raid_bdev1", 00:15:40.815 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:40.815 "strip_size_kb": 64, 00:15:40.815 "state": "configuring", 00:15:40.815 "raid_level": "raid5f", 00:15:40.815 "superblock": true, 00:15:40.815 "num_base_bdevs": 4, 00:15:40.815 "num_base_bdevs_discovered": 2, 00:15:40.815 "num_base_bdevs_operational": 3, 00:15:40.815 "base_bdevs_list": [ 00:15:40.815 { 00:15:40.815 "name": null, 00:15:40.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.815 "is_configured": false, 00:15:40.815 "data_offset": 2048, 00:15:40.815 "data_size": 63488 00:15:40.815 }, 00:15:40.815 { 00:15:40.815 "name": "pt2", 00:15:40.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:40.815 "is_configured": true, 00:15:40.815 "data_offset": 2048, 00:15:40.815 "data_size": 63488 00:15:40.815 }, 00:15:40.815 { 00:15:40.815 "name": "pt3", 00:15:40.815 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:40.815 "is_configured": true, 00:15:40.815 "data_offset": 2048, 00:15:40.815 "data_size": 63488 00:15:40.815 }, 00:15:40.815 { 00:15:40.815 "name": null, 00:15:40.815 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:40.815 "is_configured": false, 00:15:40.815 "data_offset": 2048, 00:15:40.815 "data_size": 63488 00:15:40.815 } 00:15:40.815 ] 
00:15:40.815 }' 00:15:40.815 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.815 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.083 [2024-11-16 18:56:24.467552] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:41.083 [2024-11-16 18:56:24.467609] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:41.083 [2024-11-16 18:56:24.467650] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:41.083 [2024-11-16 18:56:24.467659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:41.083 [2024-11-16 18:56:24.468131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:41.083 [2024-11-16 18:56:24.468210] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:15:41.083 [2024-11-16 18:56:24.468306] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:41.083 [2024-11-16 18:56:24.468341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:41.083 [2024-11-16 18:56:24.468496] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:41.083 [2024-11-16 18:56:24.468506] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:15:41.083 [2024-11-16 18:56:24.468788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:41.083 [2024-11-16 18:56:24.475977] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:41.083 [2024-11-16 18:56:24.476009] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:41.083 [2024-11-16 18:56:24.476265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.083 pt4 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.083 18:56:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.083 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.083 "name": "raid_bdev1", 00:15:41.083 "uuid": "9768d46a-162a-4fcc-bec4-fc4e76dd7b7e", 00:15:41.083 "strip_size_kb": 64, 00:15:41.083 "state": "online", 00:15:41.083 "raid_level": "raid5f", 00:15:41.083 "superblock": true, 00:15:41.083 "num_base_bdevs": 4, 00:15:41.083 "num_base_bdevs_discovered": 3, 00:15:41.083 "num_base_bdevs_operational": 3, 00:15:41.084 "base_bdevs_list": [ 00:15:41.084 { 00:15:41.084 "name": null, 00:15:41.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.084 "is_configured": false, 00:15:41.084 "data_offset": 2048, 00:15:41.084 "data_size": 63488 00:15:41.084 }, 00:15:41.084 { 00:15:41.084 "name": "pt2", 00:15:41.084 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:41.084 "is_configured": true, 00:15:41.084 "data_offset": 2048, 00:15:41.084 "data_size": 63488 00:15:41.084 }, 00:15:41.084 { 00:15:41.084 "name": "pt3", 00:15:41.084 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:41.084 "is_configured": true, 00:15:41.084 "data_offset": 2048, 00:15:41.084 "data_size": 63488 
00:15:41.084 }, 00:15:41.084 { 00:15:41.084 "name": "pt4", 00:15:41.084 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:41.084 "is_configured": true, 00:15:41.084 "data_offset": 2048, 00:15:41.084 "data_size": 63488 00:15:41.084 } 00:15:41.084 ] 00:15:41.084 }' 00:15:41.084 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.084 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.666 [2024-11-16 18:56:24.952181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9768d46a-162a-4fcc-bec4-fc4e76dd7b7e '!=' 9768d46a-162a-4fcc-bec4-fc4e76dd7b7e ']' 00:15:41.666 18:56:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83770 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83770 ']' 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83770 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.666 18:56:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83770 00:15:41.666 18:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.666 killing process with pid 83770 00:15:41.666 18:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.666 18:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83770' 00:15:41.666 18:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83770 00:15:41.666 [2024-11-16 18:56:25.034254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:41.666 [2024-11-16 18:56:25.034327] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:41.666 [2024-11-16 18:56:25.034396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:41.666 [2024-11-16 18:56:25.034408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:41.666 18:56:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83770 00:15:42.237 [2024-11-16 18:56:25.398869] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:43.177 18:56:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:43.177 
00:15:43.177 real 0m7.967s 00:15:43.177 user 0m12.499s 00:15:43.177 sys 0m1.422s 00:15:43.177 18:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.177 ************************************ 00:15:43.177 END TEST raid5f_superblock_test 00:15:43.177 ************************************ 00:15:43.177 18:56:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.177 18:56:26 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:43.177 18:56:26 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:15:43.177 18:56:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:43.177 18:56:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.177 18:56:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:43.177 ************************************ 00:15:43.177 START TEST raid5f_rebuild_test 00:15:43.177 ************************************ 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:43.177 18:56:26 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84251 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84251 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84251 ']' 00:15:43.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:43.177 18:56:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.177 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:43.177 Zero copy mechanism will not be used. 00:15:43.177 [2024-11-16 18:56:26.605635] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:15:43.177 [2024-11-16 18:56:26.605768] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84251 ] 00:15:43.437 [2024-11-16 18:56:26.777058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.437 [2024-11-16 18:56:26.880080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.695 [2024-11-16 18:56:27.047370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.695 [2024-11-16 18:56:27.047428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:43.955 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:43.955 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:43.955 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:43.955 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:43.955 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.955 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 BaseBdev1_malloc 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 [2024-11-16 18:56:27.443861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:44.215 [2024-11-16 18:56:27.443922] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.215 [2024-11-16 18:56:27.443961] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:44.215 [2024-11-16 18:56:27.443971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.215 [2024-11-16 18:56:27.446007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.215 [2024-11-16 18:56:27.446108] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:44.215 BaseBdev1 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 BaseBdev2_malloc 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 [2024-11-16 18:56:27.498294] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:44.215 [2024-11-16 18:56:27.498354] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.215 [2024-11-16 18:56:27.498371] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:44.215 [2024-11-16 18:56:27.498381] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.215 [2024-11-16 18:56:27.500336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.215 [2024-11-16 18:56:27.500417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:44.215 BaseBdev2 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 BaseBdev3_malloc 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 [2024-11-16 18:56:27.582002] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:44.215 [2024-11-16 18:56:27.582050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.215 [2024-11-16 18:56:27.582086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:44.215 [2024-11-16 18:56:27.582096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.215 
[2024-11-16 18:56:27.584009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.215 [2024-11-16 18:56:27.584098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:44.215 BaseBdev3 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 BaseBdev4_malloc 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 [2024-11-16 18:56:27.634687] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:44.215 [2024-11-16 18:56:27.634769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.215 [2024-11-16 18:56:27.634792] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:44.215 [2024-11-16 18:56:27.634802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.215 [2024-11-16 18:56:27.636774] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.215 [2024-11-16 18:56:27.636813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:15:44.215 BaseBdev4 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.215 spare_malloc 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.215 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.475 spare_delay 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.475 [2024-11-16 18:56:27.701386] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:44.475 [2024-11-16 18:56:27.701442] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:44.475 [2024-11-16 18:56:27.701476] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:44.475 [2024-11-16 18:56:27.701486] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:44.475 [2024-11-16 18:56:27.703527] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:44.475 [2024-11-16 18:56:27.703566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:44.475 spare 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.475 [2024-11-16 18:56:27.713418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.475 [2024-11-16 18:56:27.715211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.475 [2024-11-16 18:56:27.715309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:44.475 [2024-11-16 18:56:27.715394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:44.475 [2024-11-16 18:56:27.715513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:44.475 [2024-11-16 18:56:27.715553] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:15:44.475 [2024-11-16 18:56:27.715837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:44.475 [2024-11-16 18:56:27.723074] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:44.475 [2024-11-16 18:56:27.723125] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:44.475 [2024-11-16 18:56:27.723344] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:44.475 18:56:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.475 "name": "raid_bdev1", 00:15:44.475 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:44.475 "strip_size_kb": 64, 00:15:44.475 "state": "online", 00:15:44.475 
"raid_level": "raid5f", 00:15:44.475 "superblock": false, 00:15:44.475 "num_base_bdevs": 4, 00:15:44.475 "num_base_bdevs_discovered": 4, 00:15:44.475 "num_base_bdevs_operational": 4, 00:15:44.475 "base_bdevs_list": [ 00:15:44.475 { 00:15:44.475 "name": "BaseBdev1", 00:15:44.475 "uuid": "a2fd0f3c-2d37-5b6d-9bf3-7cb90902283c", 00:15:44.475 "is_configured": true, 00:15:44.475 "data_offset": 0, 00:15:44.475 "data_size": 65536 00:15:44.475 }, 00:15:44.475 { 00:15:44.475 "name": "BaseBdev2", 00:15:44.475 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:44.475 "is_configured": true, 00:15:44.475 "data_offset": 0, 00:15:44.475 "data_size": 65536 00:15:44.475 }, 00:15:44.475 { 00:15:44.475 "name": "BaseBdev3", 00:15:44.475 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:44.475 "is_configured": true, 00:15:44.475 "data_offset": 0, 00:15:44.475 "data_size": 65536 00:15:44.475 }, 00:15:44.475 { 00:15:44.475 "name": "BaseBdev4", 00:15:44.475 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:44.475 "is_configured": true, 00:15:44.475 "data_offset": 0, 00:15:44.475 "data_size": 65536 00:15:44.475 } 00:15:44.475 ] 00:15:44.475 }' 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.475 18:56:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.735 [2024-11-16 18:56:28.119046] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.735 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:44.995 [2024-11-16 18:56:28.386465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:44.995 /dev/nbd0 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.995 1+0 records in 00:15:44.995 1+0 records out 00:15:44.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353801 s, 11.6 MB/s 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:15:44.995 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:15:45.565 512+0 records in 00:15:45.565 512+0 records out 00:15:45.565 100663296 bytes (101 MB, 96 MiB) copied, 0.456749 s, 220 MB/s 00:15:45.565 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:45.565 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:45.565 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:45.565 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:45.565 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:45.565 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:45.565 18:56:28 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:45.824 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:45.824 
18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:45.824 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:45.824 [2024-11-16 18:56:29.120833] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.824 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.824 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.824 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:45.824 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:45.824 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.825 [2024-11-16 18:56:29.130702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.825 "name": "raid_bdev1", 00:15:45.825 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:45.825 "strip_size_kb": 64, 00:15:45.825 "state": "online", 00:15:45.825 "raid_level": "raid5f", 00:15:45.825 "superblock": false, 00:15:45.825 "num_base_bdevs": 4, 00:15:45.825 "num_base_bdevs_discovered": 3, 00:15:45.825 "num_base_bdevs_operational": 3, 00:15:45.825 "base_bdevs_list": [ 00:15:45.825 { 00:15:45.825 "name": null, 00:15:45.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.825 "is_configured": false, 00:15:45.825 "data_offset": 0, 00:15:45.825 "data_size": 65536 00:15:45.825 }, 00:15:45.825 { 00:15:45.825 "name": "BaseBdev2", 00:15:45.825 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:45.825 "is_configured": true, 00:15:45.825 "data_offset": 0, 00:15:45.825 "data_size": 65536 00:15:45.825 }, 00:15:45.825 { 00:15:45.825 "name": "BaseBdev3", 00:15:45.825 "uuid": 
"4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:45.825 "is_configured": true, 00:15:45.825 "data_offset": 0, 00:15:45.825 "data_size": 65536 00:15:45.825 }, 00:15:45.825 { 00:15:45.825 "name": "BaseBdev4", 00:15:45.825 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:45.825 "is_configured": true, 00:15:45.825 "data_offset": 0, 00:15:45.825 "data_size": 65536 00:15:45.825 } 00:15:45.825 ] 00:15:45.825 }' 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.825 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.085 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:46.085 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.085 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.085 [2024-11-16 18:56:29.521973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:46.085 [2024-11-16 18:56:29.537724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:46.085 18:56:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.085 18:56:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:46.085 [2024-11-16 18:56:29.546831] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.467 18:56:30 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.467 "name": "raid_bdev1", 00:15:47.467 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:47.467 "strip_size_kb": 64, 00:15:47.467 "state": "online", 00:15:47.467 "raid_level": "raid5f", 00:15:47.467 "superblock": false, 00:15:47.467 "num_base_bdevs": 4, 00:15:47.467 "num_base_bdevs_discovered": 4, 00:15:47.467 "num_base_bdevs_operational": 4, 00:15:47.467 "process": { 00:15:47.467 "type": "rebuild", 00:15:47.467 "target": "spare", 00:15:47.467 "progress": { 00:15:47.467 "blocks": 19200, 00:15:47.467 "percent": 9 00:15:47.467 } 00:15:47.467 }, 00:15:47.467 "base_bdevs_list": [ 00:15:47.467 { 00:15:47.467 "name": "spare", 00:15:47.467 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:47.467 "is_configured": true, 00:15:47.467 "data_offset": 0, 00:15:47.467 "data_size": 65536 00:15:47.467 }, 00:15:47.467 { 00:15:47.467 "name": "BaseBdev2", 00:15:47.467 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:47.467 "is_configured": true, 00:15:47.467 "data_offset": 0, 00:15:47.467 "data_size": 65536 00:15:47.467 }, 00:15:47.467 { 00:15:47.467 "name": "BaseBdev3", 00:15:47.467 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:47.467 "is_configured": true, 00:15:47.467 "data_offset": 0, 00:15:47.467 "data_size": 65536 00:15:47.467 }, 
00:15:47.467 { 00:15:47.467 "name": "BaseBdev4", 00:15:47.467 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:47.467 "is_configured": true, 00:15:47.467 "data_offset": 0, 00:15:47.467 "data_size": 65536 00:15:47.467 } 00:15:47.467 ] 00:15:47.467 }' 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.467 [2024-11-16 18:56:30.677535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.467 [2024-11-16 18:56:30.752498] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:47.467 [2024-11-16 18:56:30.752557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:47.467 [2024-11-16 18:56:30.752573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:47.467 [2024-11-16 18:56:30.752582] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.467 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.467 "name": "raid_bdev1", 00:15:47.467 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:47.467 "strip_size_kb": 64, 00:15:47.467 "state": "online", 00:15:47.467 "raid_level": "raid5f", 00:15:47.467 "superblock": false, 00:15:47.467 "num_base_bdevs": 4, 00:15:47.467 "num_base_bdevs_discovered": 3, 00:15:47.467 "num_base_bdevs_operational": 3, 00:15:47.467 "base_bdevs_list": [ 00:15:47.467 { 00:15:47.467 "name": null, 00:15:47.468 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:47.468 "is_configured": false, 00:15:47.468 "data_offset": 0, 00:15:47.468 "data_size": 65536 00:15:47.468 }, 00:15:47.468 { 00:15:47.468 "name": "BaseBdev2", 00:15:47.468 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:47.468 "is_configured": true, 00:15:47.468 "data_offset": 0, 00:15:47.468 "data_size": 65536 00:15:47.468 }, 00:15:47.468 { 00:15:47.468 "name": "BaseBdev3", 00:15:47.468 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:47.468 "is_configured": true, 00:15:47.468 "data_offset": 0, 00:15:47.468 "data_size": 65536 00:15:47.468 }, 00:15:47.468 { 00:15:47.468 "name": "BaseBdev4", 00:15:47.468 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:47.468 "is_configured": true, 00:15:47.468 "data_offset": 0, 00:15:47.468 "data_size": 65536 00:15:47.468 } 00:15:47.468 ] 00:15:47.468 }' 00:15:47.468 18:56:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.468 18:56:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.037 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:48.037 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.037 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:48.037 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.038 "name": "raid_bdev1", 00:15:48.038 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:48.038 "strip_size_kb": 64, 00:15:48.038 "state": "online", 00:15:48.038 "raid_level": "raid5f", 00:15:48.038 "superblock": false, 00:15:48.038 "num_base_bdevs": 4, 00:15:48.038 "num_base_bdevs_discovered": 3, 00:15:48.038 "num_base_bdevs_operational": 3, 00:15:48.038 "base_bdevs_list": [ 00:15:48.038 { 00:15:48.038 "name": null, 00:15:48.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:48.038 "is_configured": false, 00:15:48.038 "data_offset": 0, 00:15:48.038 "data_size": 65536 00:15:48.038 }, 00:15:48.038 { 00:15:48.038 "name": "BaseBdev2", 00:15:48.038 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:48.038 "is_configured": true, 00:15:48.038 "data_offset": 0, 00:15:48.038 "data_size": 65536 00:15:48.038 }, 00:15:48.038 { 00:15:48.038 "name": "BaseBdev3", 00:15:48.038 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:48.038 "is_configured": true, 00:15:48.038 "data_offset": 0, 00:15:48.038 "data_size": 65536 00:15:48.038 }, 00:15:48.038 { 00:15:48.038 "name": "BaseBdev4", 00:15:48.038 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:48.038 "is_configured": true, 00:15:48.038 "data_offset": 0, 00:15:48.038 "data_size": 65536 00:15:48.038 } 00:15:48.038 ] 00:15:48.038 }' 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == 
\n\o\n\e ]] 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.038 [2024-11-16 18:56:31.352341] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:48.038 [2024-11-16 18:56:31.366973] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.038 18:56:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:48.038 [2024-11-16 18:56:31.375460] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:48.977 "name": "raid_bdev1", 00:15:48.977 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:48.977 "strip_size_kb": 64, 00:15:48.977 "state": "online", 00:15:48.977 "raid_level": "raid5f", 00:15:48.977 "superblock": false, 00:15:48.977 "num_base_bdevs": 4, 00:15:48.977 "num_base_bdevs_discovered": 4, 00:15:48.977 "num_base_bdevs_operational": 4, 00:15:48.977 "process": { 00:15:48.977 "type": "rebuild", 00:15:48.977 "target": "spare", 00:15:48.977 "progress": { 00:15:48.977 "blocks": 19200, 00:15:48.977 "percent": 9 00:15:48.977 } 00:15:48.977 }, 00:15:48.977 "base_bdevs_list": [ 00:15:48.977 { 00:15:48.977 "name": "spare", 00:15:48.977 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:48.977 "is_configured": true, 00:15:48.977 "data_offset": 0, 00:15:48.977 "data_size": 65536 00:15:48.977 }, 00:15:48.977 { 00:15:48.977 "name": "BaseBdev2", 00:15:48.977 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:48.977 "is_configured": true, 00:15:48.977 "data_offset": 0, 00:15:48.977 "data_size": 65536 00:15:48.977 }, 00:15:48.977 { 00:15:48.977 "name": "BaseBdev3", 00:15:48.977 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:48.977 "is_configured": true, 00:15:48.977 "data_offset": 0, 00:15:48.977 "data_size": 65536 00:15:48.977 }, 00:15:48.977 { 00:15:48.977 "name": "BaseBdev4", 00:15:48.977 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:48.977 "is_configured": true, 00:15:48.977 "data_offset": 0, 00:15:48.977 "data_size": 65536 00:15:48.977 } 00:15:48.977 ] 00:15:48.977 }' 00:15:48.977 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=594 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.237 "name": "raid_bdev1", 00:15:49.237 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:49.237 "strip_size_kb": 64, 
00:15:49.237 "state": "online", 00:15:49.237 "raid_level": "raid5f", 00:15:49.237 "superblock": false, 00:15:49.237 "num_base_bdevs": 4, 00:15:49.237 "num_base_bdevs_discovered": 4, 00:15:49.237 "num_base_bdevs_operational": 4, 00:15:49.237 "process": { 00:15:49.237 "type": "rebuild", 00:15:49.237 "target": "spare", 00:15:49.237 "progress": { 00:15:49.237 "blocks": 21120, 00:15:49.237 "percent": 10 00:15:49.237 } 00:15:49.237 }, 00:15:49.237 "base_bdevs_list": [ 00:15:49.237 { 00:15:49.237 "name": "spare", 00:15:49.237 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:49.237 "is_configured": true, 00:15:49.237 "data_offset": 0, 00:15:49.237 "data_size": 65536 00:15:49.237 }, 00:15:49.237 { 00:15:49.237 "name": "BaseBdev2", 00:15:49.237 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:49.237 "is_configured": true, 00:15:49.237 "data_offset": 0, 00:15:49.237 "data_size": 65536 00:15:49.237 }, 00:15:49.237 { 00:15:49.237 "name": "BaseBdev3", 00:15:49.237 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:49.237 "is_configured": true, 00:15:49.237 "data_offset": 0, 00:15:49.237 "data_size": 65536 00:15:49.237 }, 00:15:49.237 { 00:15:49.237 "name": "BaseBdev4", 00:15:49.237 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:49.237 "is_configured": true, 00:15:49.237 "data_offset": 0, 00:15:49.237 "data_size": 65536 00:15:49.237 } 00:15:49.237 ] 00:15:49.237 }' 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.237 18:56:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.177 18:56:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.437 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.437 "name": "raid_bdev1", 00:15:50.437 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:50.437 "strip_size_kb": 64, 00:15:50.437 "state": "online", 00:15:50.437 "raid_level": "raid5f", 00:15:50.437 "superblock": false, 00:15:50.437 "num_base_bdevs": 4, 00:15:50.437 "num_base_bdevs_discovered": 4, 00:15:50.437 "num_base_bdevs_operational": 4, 00:15:50.437 "process": { 00:15:50.437 "type": "rebuild", 00:15:50.437 "target": "spare", 00:15:50.437 "progress": { 00:15:50.437 "blocks": 42240, 00:15:50.437 "percent": 21 00:15:50.437 } 00:15:50.437 }, 00:15:50.437 "base_bdevs_list": [ 00:15:50.437 { 00:15:50.437 "name": "spare", 00:15:50.437 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:50.437 "is_configured": true, 
00:15:50.437 "data_offset": 0, 00:15:50.437 "data_size": 65536 00:15:50.437 }, 00:15:50.437 { 00:15:50.437 "name": "BaseBdev2", 00:15:50.437 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:50.437 "is_configured": true, 00:15:50.437 "data_offset": 0, 00:15:50.437 "data_size": 65536 00:15:50.437 }, 00:15:50.437 { 00:15:50.437 "name": "BaseBdev3", 00:15:50.437 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:50.437 "is_configured": true, 00:15:50.437 "data_offset": 0, 00:15:50.437 "data_size": 65536 00:15:50.437 }, 00:15:50.437 { 00:15:50.437 "name": "BaseBdev4", 00:15:50.437 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:50.437 "is_configured": true, 00:15:50.437 "data_offset": 0, 00:15:50.437 "data_size": 65536 00:15:50.437 } 00:15:50.437 ] 00:15:50.437 }' 00:15:50.437 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.437 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:50.437 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.437 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.437 18:56:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:51.377 "name": "raid_bdev1", 00:15:51.377 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:51.377 "strip_size_kb": 64, 00:15:51.377 "state": "online", 00:15:51.377 "raid_level": "raid5f", 00:15:51.377 "superblock": false, 00:15:51.377 "num_base_bdevs": 4, 00:15:51.377 "num_base_bdevs_discovered": 4, 00:15:51.377 "num_base_bdevs_operational": 4, 00:15:51.377 "process": { 00:15:51.377 "type": "rebuild", 00:15:51.377 "target": "spare", 00:15:51.377 "progress": { 00:15:51.377 "blocks": 63360, 00:15:51.377 "percent": 32 00:15:51.377 } 00:15:51.377 }, 00:15:51.377 "base_bdevs_list": [ 00:15:51.377 { 00:15:51.377 "name": "spare", 00:15:51.377 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:51.377 "is_configured": true, 00:15:51.377 "data_offset": 0, 00:15:51.377 "data_size": 65536 00:15:51.377 }, 00:15:51.377 { 00:15:51.377 "name": "BaseBdev2", 00:15:51.377 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:51.377 "is_configured": true, 00:15:51.377 "data_offset": 0, 00:15:51.377 "data_size": 65536 00:15:51.377 }, 00:15:51.377 { 00:15:51.377 "name": "BaseBdev3", 00:15:51.377 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:51.377 "is_configured": true, 00:15:51.377 "data_offset": 0, 00:15:51.377 "data_size": 65536 00:15:51.377 }, 00:15:51.377 { 00:15:51.377 "name": "BaseBdev4", 00:15:51.377 "uuid": 
"14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:51.377 "is_configured": true, 00:15:51.377 "data_offset": 0, 00:15:51.377 "data_size": 65536 00:15:51.377 } 00:15:51.377 ] 00:15:51.377 }' 00:15:51.377 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:51.636 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:51.636 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:51.636 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:51.636 18:56:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:52.575 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:52.575 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:52.575 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.575 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:52.575 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:52.575 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.575 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.576 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.576 18:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.576 18:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.576 18:56:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.576 18:56:35 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.576 "name": "raid_bdev1", 00:15:52.576 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:52.576 "strip_size_kb": 64, 00:15:52.576 "state": "online", 00:15:52.576 "raid_level": "raid5f", 00:15:52.576 "superblock": false, 00:15:52.576 "num_base_bdevs": 4, 00:15:52.576 "num_base_bdevs_discovered": 4, 00:15:52.576 "num_base_bdevs_operational": 4, 00:15:52.576 "process": { 00:15:52.576 "type": "rebuild", 00:15:52.576 "target": "spare", 00:15:52.576 "progress": { 00:15:52.576 "blocks": 86400, 00:15:52.576 "percent": 43 00:15:52.576 } 00:15:52.576 }, 00:15:52.576 "base_bdevs_list": [ 00:15:52.576 { 00:15:52.576 "name": "spare", 00:15:52.576 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:52.576 "is_configured": true, 00:15:52.576 "data_offset": 0, 00:15:52.576 "data_size": 65536 00:15:52.576 }, 00:15:52.576 { 00:15:52.576 "name": "BaseBdev2", 00:15:52.576 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:52.576 "is_configured": true, 00:15:52.576 "data_offset": 0, 00:15:52.576 "data_size": 65536 00:15:52.576 }, 00:15:52.576 { 00:15:52.576 "name": "BaseBdev3", 00:15:52.576 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:52.576 "is_configured": true, 00:15:52.576 "data_offset": 0, 00:15:52.576 "data_size": 65536 00:15:52.576 }, 00:15:52.576 { 00:15:52.576 "name": "BaseBdev4", 00:15:52.576 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:52.576 "is_configured": true, 00:15:52.576 "data_offset": 0, 00:15:52.576 "data_size": 65536 00:15:52.576 } 00:15:52.576 ] 00:15:52.576 }' 00:15:52.576 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.576 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:52.576 18:56:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.576 18:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ 
spare == \s\p\a\r\e ]] 00:15:52.576 18:56:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:53.958 "name": "raid_bdev1", 00:15:53.958 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:53.958 "strip_size_kb": 64, 00:15:53.958 "state": "online", 00:15:53.958 "raid_level": "raid5f", 00:15:53.958 "superblock": false, 00:15:53.958 "num_base_bdevs": 4, 00:15:53.958 "num_base_bdevs_discovered": 4, 00:15:53.958 "num_base_bdevs_operational": 4, 00:15:53.958 "process": { 00:15:53.958 "type": "rebuild", 00:15:53.958 "target": "spare", 00:15:53.958 "progress": { 00:15:53.958 "blocks": 107520, 00:15:53.958 "percent": 54 00:15:53.958 } 00:15:53.958 }, 00:15:53.958 
"base_bdevs_list": [ 00:15:53.958 { 00:15:53.958 "name": "spare", 00:15:53.958 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:53.958 "is_configured": true, 00:15:53.958 "data_offset": 0, 00:15:53.958 "data_size": 65536 00:15:53.958 }, 00:15:53.958 { 00:15:53.958 "name": "BaseBdev2", 00:15:53.958 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:53.958 "is_configured": true, 00:15:53.958 "data_offset": 0, 00:15:53.958 "data_size": 65536 00:15:53.958 }, 00:15:53.958 { 00:15:53.958 "name": "BaseBdev3", 00:15:53.958 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:53.958 "is_configured": true, 00:15:53.958 "data_offset": 0, 00:15:53.958 "data_size": 65536 00:15:53.958 }, 00:15:53.958 { 00:15:53.958 "name": "BaseBdev4", 00:15:53.958 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:53.958 "is_configured": true, 00:15:53.958 "data_offset": 0, 00:15:53.958 "data_size": 65536 00:15:53.958 } 00:15:53.958 ] 00:15:53.958 }' 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:53.958 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:53.959 18:56:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.897 18:56:38 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.897 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.897 "name": "raid_bdev1", 00:15:54.897 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:54.897 "strip_size_kb": 64, 00:15:54.897 "state": "online", 00:15:54.897 "raid_level": "raid5f", 00:15:54.897 "superblock": false, 00:15:54.897 "num_base_bdevs": 4, 00:15:54.897 "num_base_bdevs_discovered": 4, 00:15:54.897 "num_base_bdevs_operational": 4, 00:15:54.897 "process": { 00:15:54.897 "type": "rebuild", 00:15:54.897 "target": "spare", 00:15:54.897 "progress": { 00:15:54.897 "blocks": 128640, 00:15:54.897 "percent": 65 00:15:54.897 } 00:15:54.897 }, 00:15:54.897 "base_bdevs_list": [ 00:15:54.897 { 00:15:54.897 "name": "spare", 00:15:54.898 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:54.898 "is_configured": true, 00:15:54.898 "data_offset": 0, 00:15:54.898 "data_size": 65536 00:15:54.898 }, 00:15:54.898 { 00:15:54.898 "name": "BaseBdev2", 00:15:54.898 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:54.898 "is_configured": true, 00:15:54.898 "data_offset": 0, 00:15:54.898 "data_size": 65536 00:15:54.898 }, 00:15:54.898 { 00:15:54.898 "name": "BaseBdev3", 00:15:54.898 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:54.898 
"is_configured": true, 00:15:54.898 "data_offset": 0, 00:15:54.898 "data_size": 65536 00:15:54.898 }, 00:15:54.898 { 00:15:54.898 "name": "BaseBdev4", 00:15:54.898 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:54.898 "is_configured": true, 00:15:54.898 "data_offset": 0, 00:15:54.898 "data_size": 65536 00:15:54.898 } 00:15:54.898 ] 00:15:54.898 }' 00:15:54.898 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.898 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.898 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.898 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.898 18:56:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.286 "name": "raid_bdev1", 00:15:56.286 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:56.286 "strip_size_kb": 64, 00:15:56.286 "state": "online", 00:15:56.286 "raid_level": "raid5f", 00:15:56.286 "superblock": false, 00:15:56.286 "num_base_bdevs": 4, 00:15:56.286 "num_base_bdevs_discovered": 4, 00:15:56.286 "num_base_bdevs_operational": 4, 00:15:56.286 "process": { 00:15:56.286 "type": "rebuild", 00:15:56.286 "target": "spare", 00:15:56.286 "progress": { 00:15:56.286 "blocks": 151680, 00:15:56.286 "percent": 77 00:15:56.286 } 00:15:56.286 }, 00:15:56.286 "base_bdevs_list": [ 00:15:56.286 { 00:15:56.286 "name": "spare", 00:15:56.286 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:56.286 "is_configured": true, 00:15:56.286 "data_offset": 0, 00:15:56.286 "data_size": 65536 00:15:56.286 }, 00:15:56.286 { 00:15:56.286 "name": "BaseBdev2", 00:15:56.286 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:56.286 "is_configured": true, 00:15:56.286 "data_offset": 0, 00:15:56.286 "data_size": 65536 00:15:56.286 }, 00:15:56.286 { 00:15:56.286 "name": "BaseBdev3", 00:15:56.286 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:56.286 "is_configured": true, 00:15:56.286 "data_offset": 0, 00:15:56.286 "data_size": 65536 00:15:56.286 }, 00:15:56.286 { 00:15:56.286 "name": "BaseBdev4", 00:15:56.286 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:56.286 "is_configured": true, 00:15:56.286 "data_offset": 0, 00:15:56.286 "data_size": 65536 00:15:56.286 } 00:15:56.286 ] 00:15:56.286 }' 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:56.286 18:56:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:56.286 18:56:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.225 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:57.225 "name": "raid_bdev1", 00:15:57.225 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:57.225 "strip_size_kb": 64, 00:15:57.225 "state": "online", 00:15:57.225 "raid_level": "raid5f", 00:15:57.225 "superblock": false, 00:15:57.225 "num_base_bdevs": 4, 00:15:57.225 "num_base_bdevs_discovered": 4, 00:15:57.225 "num_base_bdevs_operational": 4, 00:15:57.225 "process": { 00:15:57.225 
"type": "rebuild", 00:15:57.225 "target": "spare", 00:15:57.225 "progress": { 00:15:57.225 "blocks": 172800, 00:15:57.225 "percent": 87 00:15:57.225 } 00:15:57.225 }, 00:15:57.225 "base_bdevs_list": [ 00:15:57.225 { 00:15:57.225 "name": "spare", 00:15:57.225 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:57.225 "is_configured": true, 00:15:57.225 "data_offset": 0, 00:15:57.225 "data_size": 65536 00:15:57.225 }, 00:15:57.225 { 00:15:57.225 "name": "BaseBdev2", 00:15:57.225 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:57.225 "is_configured": true, 00:15:57.226 "data_offset": 0, 00:15:57.226 "data_size": 65536 00:15:57.226 }, 00:15:57.226 { 00:15:57.226 "name": "BaseBdev3", 00:15:57.226 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:57.226 "is_configured": true, 00:15:57.226 "data_offset": 0, 00:15:57.226 "data_size": 65536 00:15:57.226 }, 00:15:57.226 { 00:15:57.226 "name": "BaseBdev4", 00:15:57.226 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:57.226 "is_configured": true, 00:15:57.226 "data_offset": 0, 00:15:57.226 "data_size": 65536 00:15:57.226 } 00:15:57.226 ] 00:15:57.226 }' 00:15:57.226 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:57.226 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:57.226 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:57.226 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:57.226 18:56:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:58.165 18:56:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.426 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.426 "name": "raid_bdev1", 00:15:58.426 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:58.426 "strip_size_kb": 64, 00:15:58.426 "state": "online", 00:15:58.426 "raid_level": "raid5f", 00:15:58.426 "superblock": false, 00:15:58.426 "num_base_bdevs": 4, 00:15:58.426 "num_base_bdevs_discovered": 4, 00:15:58.426 "num_base_bdevs_operational": 4, 00:15:58.426 "process": { 00:15:58.426 "type": "rebuild", 00:15:58.426 "target": "spare", 00:15:58.426 "progress": { 00:15:58.426 "blocks": 195840, 00:15:58.426 "percent": 99 00:15:58.426 } 00:15:58.426 }, 00:15:58.426 "base_bdevs_list": [ 00:15:58.426 { 00:15:58.426 "name": "spare", 00:15:58.426 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:58.426 "is_configured": true, 00:15:58.426 "data_offset": 0, 00:15:58.426 "data_size": 65536 00:15:58.426 }, 00:15:58.426 { 00:15:58.426 "name": "BaseBdev2", 00:15:58.426 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:58.426 "is_configured": true, 00:15:58.426 "data_offset": 0, 00:15:58.426 
"data_size": 65536 00:15:58.426 }, 00:15:58.426 { 00:15:58.426 "name": "BaseBdev3", 00:15:58.426 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:58.426 "is_configured": true, 00:15:58.426 "data_offset": 0, 00:15:58.426 "data_size": 65536 00:15:58.426 }, 00:15:58.426 { 00:15:58.426 "name": "BaseBdev4", 00:15:58.426 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:58.426 "is_configured": true, 00:15:58.426 "data_offset": 0, 00:15:58.426 "data_size": 65536 00:15:58.426 } 00:15:58.426 ] 00:15:58.426 }' 00:15:58.426 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.426 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:58.426 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.426 [2024-11-16 18:56:41.718565] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:58.426 [2024-11-16 18:56:41.718686] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:58.426 [2024-11-16 18:56:41.718758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.426 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:58.426 18:56:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.541 "name": "raid_bdev1", 00:15:59.541 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:59.541 "strip_size_kb": 64, 00:15:59.541 "state": "online", 00:15:59.541 "raid_level": "raid5f", 00:15:59.541 "superblock": false, 00:15:59.541 "num_base_bdevs": 4, 00:15:59.541 "num_base_bdevs_discovered": 4, 00:15:59.541 "num_base_bdevs_operational": 4, 00:15:59.541 "base_bdevs_list": [ 00:15:59.541 { 00:15:59.541 "name": "spare", 00:15:59.541 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 0, 00:15:59.541 "data_size": 65536 00:15:59.541 }, 00:15:59.541 { 00:15:59.541 "name": "BaseBdev2", 00:15:59.541 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 0, 00:15:59.541 "data_size": 65536 00:15:59.541 }, 00:15:59.541 { 00:15:59.541 "name": "BaseBdev3", 00:15:59.541 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 0, 00:15:59.541 "data_size": 65536 00:15:59.541 }, 00:15:59.541 { 00:15:59.541 "name": "BaseBdev4", 00:15:59.541 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 0, 
00:15:59.541 "data_size": 65536 00:15:59.541 } 00:15:59.541 ] 00:15:59.541 }' 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.541 "name": "raid_bdev1", 00:15:59.541 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:59.541 "strip_size_kb": 64, 00:15:59.541 "state": "online", 00:15:59.541 "raid_level": 
"raid5f", 00:15:59.541 "superblock": false, 00:15:59.541 "num_base_bdevs": 4, 00:15:59.541 "num_base_bdevs_discovered": 4, 00:15:59.541 "num_base_bdevs_operational": 4, 00:15:59.541 "base_bdevs_list": [ 00:15:59.541 { 00:15:59.541 "name": "spare", 00:15:59.541 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 0, 00:15:59.541 "data_size": 65536 00:15:59.541 }, 00:15:59.541 { 00:15:59.541 "name": "BaseBdev2", 00:15:59.541 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 0, 00:15:59.541 "data_size": 65536 00:15:59.541 }, 00:15:59.541 { 00:15:59.541 "name": "BaseBdev3", 00:15:59.541 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 0, 00:15:59.541 "data_size": 65536 00:15:59.541 }, 00:15:59.541 { 00:15:59.541 "name": "BaseBdev4", 00:15:59.541 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:59.541 "is_configured": true, 00:15:59.541 "data_offset": 0, 00:15:59.541 "data_size": 65536 00:15:59.541 } 00:15:59.541 ] 00:15:59.541 }' 00:15:59.541 18:56:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.812 "name": "raid_bdev1", 00:15:59.812 "uuid": "5a081627-1977-49d5-a1f9-dc07822b2c67", 00:15:59.812 "strip_size_kb": 64, 00:15:59.812 "state": "online", 00:15:59.812 "raid_level": "raid5f", 00:15:59.812 "superblock": false, 00:15:59.812 "num_base_bdevs": 4, 00:15:59.812 "num_base_bdevs_discovered": 4, 00:15:59.812 "num_base_bdevs_operational": 4, 00:15:59.812 "base_bdevs_list": [ 00:15:59.812 { 00:15:59.812 "name": "spare", 00:15:59.812 "uuid": "42a83008-f94e-5fad-a439-3452aed79d1a", 00:15:59.812 "is_configured": true, 00:15:59.812 "data_offset": 0, 00:15:59.812 "data_size": 65536 00:15:59.812 }, 00:15:59.812 { 00:15:59.812 "name": "BaseBdev2", 
00:15:59.812 "uuid": "8f2ecf10-d64f-5cee-9167-bf542a02af7a", 00:15:59.812 "is_configured": true, 00:15:59.812 "data_offset": 0, 00:15:59.812 "data_size": 65536 00:15:59.812 }, 00:15:59.812 { 00:15:59.812 "name": "BaseBdev3", 00:15:59.812 "uuid": "4ed9a3e5-2500-5644-a4ba-909e4b535a1f", 00:15:59.812 "is_configured": true, 00:15:59.812 "data_offset": 0, 00:15:59.812 "data_size": 65536 00:15:59.812 }, 00:15:59.812 { 00:15:59.812 "name": "BaseBdev4", 00:15:59.812 "uuid": "14e54acb-5ed2-5d4e-92fd-8ad159a7d8fc", 00:15:59.812 "is_configured": true, 00:15:59.812 "data_offset": 0, 00:15:59.812 "data_size": 65536 00:15:59.812 } 00:15:59.812 ] 00:15:59.812 }' 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.812 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.072 [2024-11-16 18:56:43.474107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:00.072 [2024-11-16 18:56:43.474187] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.072 [2024-11-16 18:56:43.474278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.072 [2024-11-16 18:56:43.474392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.072 [2024-11-16 18:56:43.474405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.072 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:00.332 /dev/nbd0 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.332 1+0 records in 00:16:00.332 1+0 records out 00:16:00.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000412887 s, 9.9 MB/s 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.332 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:00.592 /dev/nbd1 00:16:00.592 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:00.592 18:56:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:00.593 1+0 records in 00:16:00.593 1+0 records out 00:16:00.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345695 s, 11.8 MB/s 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:00.593 18:56:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:00.593 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:00.593 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:00.593 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:00.593 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:00.593 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:00.852 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:00.852 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:00.852 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:00.852 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:00.852 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:00.852 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:00.852 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 
/proc/partitions 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84251 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84251 ']' 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84251 00:16:01.112 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:01.371 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:01.371 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 84251 00:16:01.371 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:01.371 killing process with pid 84251 00:16:01.371 Received shutdown signal, test time was about 60.000000 seconds 00:16:01.371 00:16:01.371 Latency(us) 00:16:01.371 [2024-11-16T18:56:44.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.371 [2024-11-16T18:56:44.843Z] =================================================================================================================== 00:16:01.371 [2024-11-16T18:56:44.843Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:01.371 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:01.371 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84251' 00:16:01.371 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84251 00:16:01.371 [2024-11-16 18:56:44.621556] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:01.371 18:56:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84251 00:16:01.631 [2024-11-16 18:56:45.074045] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:03.012 00:16:03.012 real 0m19.579s 00:16:03.012 user 0m23.329s 00:16:03.012 sys 0m2.077s 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.012 ************************************ 00:16:03.012 END TEST raid5f_rebuild_test 00:16:03.012 ************************************ 00:16:03.012 18:56:46 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:03.012 18:56:46 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:03.012 18:56:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:03.012 18:56:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:03.012 ************************************ 00:16:03.012 START TEST raid5f_rebuild_test_sb 00:16:03.012 ************************************ 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:03.012 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:03.013 18:56:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84775 
00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84775 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84775 ']' 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:03.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:03.013 18:56:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.013 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:03.013 Zero copy mechanism will not be used. 00:16:03.013 [2024-11-16 18:56:46.260022] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:03.013 [2024-11-16 18:56:46.260136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84775 ] 00:16:03.013 [2024-11-16 18:56:46.431406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:03.273 [2024-11-16 18:56:46.532006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.273 [2024-11-16 18:56:46.720998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.273 [2024-11-16 18:56:46.721036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.843 BaseBdev1_malloc 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.843 [2024-11-16 18:56:47.116253] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:03.843 [2024-11-16 18:56:47.116330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.843 [2024-11-16 18:56:47.116355] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:03.843 [2024-11-16 18:56:47.116366] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.843 [2024-11-16 18:56:47.118389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.843 [2024-11-16 18:56:47.118424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:03.843 BaseBdev1 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.843 BaseBdev2_malloc 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.843 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.843 [2024-11-16 18:56:47.168142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:03.843 [2024-11-16 18:56:47.168190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:16:03.843 [2024-11-16 18:56:47.168207] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:03.844 [2024-11-16 18:56:47.168219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.844 [2024-11-16 18:56:47.170149] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.844 [2024-11-16 18:56:47.170182] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:03.844 BaseBdev2 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 BaseBdev3_malloc 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 [2024-11-16 18:56:47.248630] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:03.844 [2024-11-16 18:56:47.248686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.844 [2024-11-16 18:56:47.248706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:03.844 [2024-11-16 
18:56:47.248716] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.844 [2024-11-16 18:56:47.250608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.844 [2024-11-16 18:56:47.250644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:03.844 BaseBdev3 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 BaseBdev4_malloc 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.844 [2024-11-16 18:56:47.303385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:03.844 [2024-11-16 18:56:47.303444] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.844 [2024-11-16 18:56:47.303461] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:03.844 [2024-11-16 18:56:47.303470] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.844 [2024-11-16 18:56:47.305413] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:16:03.844 [2024-11-16 18:56:47.305449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:03.844 BaseBdev4 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.844 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.105 spare_malloc 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.105 spare_delay 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.105 [2024-11-16 18:56:47.368727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:04.105 [2024-11-16 18:56:47.368773] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:04.105 [2024-11-16 18:56:47.368791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:16:04.105 [2024-11-16 18:56:47.368801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:04.105 [2024-11-16 18:56:47.370762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:04.105 [2024-11-16 18:56:47.370794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:04.105 spare 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.105 [2024-11-16 18:56:47.380773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.105 [2024-11-16 18:56:47.382532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:04.105 [2024-11-16 18:56:47.382592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.105 [2024-11-16 18:56:47.382640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:04.105 [2024-11-16 18:56:47.382822] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:04.105 [2024-11-16 18:56:47.382866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:04.105 [2024-11-16 18:56:47.383088] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:04.105 [2024-11-16 18:56:47.389953] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:04.105 [2024-11-16 18:56:47.389975] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:16:04.105 [2024-11-16 18:56:47.390160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.105 18:56:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.105 "name": "raid_bdev1", 00:16:04.105 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:04.105 "strip_size_kb": 64, 00:16:04.105 "state": "online", 00:16:04.105 "raid_level": "raid5f", 00:16:04.105 "superblock": true, 00:16:04.105 "num_base_bdevs": 4, 00:16:04.105 "num_base_bdevs_discovered": 4, 00:16:04.105 "num_base_bdevs_operational": 4, 00:16:04.105 "base_bdevs_list": [ 00:16:04.105 { 00:16:04.105 "name": "BaseBdev1", 00:16:04.105 "uuid": "5c10fe7e-2f0b-54fc-8f1c-7844228c29fd", 00:16:04.105 "is_configured": true, 00:16:04.105 "data_offset": 2048, 00:16:04.105 "data_size": 63488 00:16:04.105 }, 00:16:04.105 { 00:16:04.105 "name": "BaseBdev2", 00:16:04.105 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:04.105 "is_configured": true, 00:16:04.105 "data_offset": 2048, 00:16:04.105 "data_size": 63488 00:16:04.105 }, 00:16:04.105 { 00:16:04.105 "name": "BaseBdev3", 00:16:04.105 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:04.105 "is_configured": true, 00:16:04.105 "data_offset": 2048, 00:16:04.105 "data_size": 63488 00:16:04.105 }, 00:16:04.105 { 00:16:04.105 "name": "BaseBdev4", 00:16:04.105 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:04.105 "is_configured": true, 00:16:04.105 "data_offset": 2048, 00:16:04.105 "data_size": 63488 00:16:04.105 } 00:16:04.105 ] 00:16:04.105 }' 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.105 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.366 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.366 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:04.366 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.366 18:56:47 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.366 [2024-11-16 18:56:47.781897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.366 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.366 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:04.366 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:04.366 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.366 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.366 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:04.627 18:56:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.627 18:56:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:04.627 [2024-11-16 18:56:48.049324] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:04.627 /dev/nbd0 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:04.887 1+0 records in 00:16:04.887 
1+0 records out 00:16:04.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363728 s, 11.3 MB/s 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:04.887 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:05.147 496+0 records in 00:16:05.147 496+0 records out 00:16:05.147 97517568 bytes (98 MB, 93 MiB) copied, 0.421451 s, 231 MB/s 00:16:05.147 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:05.147 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.147 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:05.147 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.147 18:56:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:05.147 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.147 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:05.407 [2024-11-16 18:56:48.754313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.407 [2024-11-16 18:56:48.788269] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:05.407 18:56:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.407 "name": "raid_bdev1", 00:16:05.407 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:05.407 "strip_size_kb": 64, 00:16:05.407 "state": "online", 00:16:05.407 "raid_level": "raid5f", 00:16:05.407 "superblock": true, 00:16:05.407 "num_base_bdevs": 4, 00:16:05.407 "num_base_bdevs_discovered": 3, 00:16:05.407 "num_base_bdevs_operational": 3, 00:16:05.407 
"base_bdevs_list": [ 00:16:05.407 { 00:16:05.407 "name": null, 00:16:05.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.407 "is_configured": false, 00:16:05.407 "data_offset": 0, 00:16:05.407 "data_size": 63488 00:16:05.407 }, 00:16:05.407 { 00:16:05.407 "name": "BaseBdev2", 00:16:05.407 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:05.407 "is_configured": true, 00:16:05.407 "data_offset": 2048, 00:16:05.407 "data_size": 63488 00:16:05.407 }, 00:16:05.407 { 00:16:05.407 "name": "BaseBdev3", 00:16:05.407 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:05.407 "is_configured": true, 00:16:05.407 "data_offset": 2048, 00:16:05.407 "data_size": 63488 00:16:05.407 }, 00:16:05.407 { 00:16:05.407 "name": "BaseBdev4", 00:16:05.407 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:05.407 "is_configured": true, 00:16:05.407 "data_offset": 2048, 00:16:05.407 "data_size": 63488 00:16:05.407 } 00:16:05.407 ] 00:16:05.407 }' 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.407 18:56:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.976 18:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:05.976 18:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.976 18:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.976 [2024-11-16 18:56:49.199557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:05.976 [2024-11-16 18:56:49.215073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:05.976 18:56:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.976 18:56:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:05.976 [2024-11-16 18:56:49.223895] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.916 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:06.916 "name": "raid_bdev1", 00:16:06.916 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:06.916 "strip_size_kb": 64, 00:16:06.916 "state": "online", 00:16:06.916 "raid_level": "raid5f", 00:16:06.916 "superblock": true, 00:16:06.916 "num_base_bdevs": 4, 00:16:06.916 "num_base_bdevs_discovered": 4, 00:16:06.916 "num_base_bdevs_operational": 4, 00:16:06.916 "process": { 00:16:06.916 "type": "rebuild", 00:16:06.916 "target": "spare", 00:16:06.916 "progress": { 00:16:06.916 "blocks": 19200, 00:16:06.916 "percent": 10 00:16:06.916 } 00:16:06.916 }, 00:16:06.916 "base_bdevs_list": [ 00:16:06.917 { 00:16:06.917 "name": "spare", 00:16:06.917 "uuid": 
"5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:06.917 "is_configured": true, 00:16:06.917 "data_offset": 2048, 00:16:06.917 "data_size": 63488 00:16:06.917 }, 00:16:06.917 { 00:16:06.917 "name": "BaseBdev2", 00:16:06.917 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:06.917 "is_configured": true, 00:16:06.917 "data_offset": 2048, 00:16:06.917 "data_size": 63488 00:16:06.917 }, 00:16:06.917 { 00:16:06.917 "name": "BaseBdev3", 00:16:06.917 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:06.917 "is_configured": true, 00:16:06.917 "data_offset": 2048, 00:16:06.917 "data_size": 63488 00:16:06.917 }, 00:16:06.917 { 00:16:06.917 "name": "BaseBdev4", 00:16:06.917 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:06.917 "is_configured": true, 00:16:06.917 "data_offset": 2048, 00:16:06.917 "data_size": 63488 00:16:06.917 } 00:16:06.917 ] 00:16:06.917 }' 00:16:06.917 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:06.917 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:06.917 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:06.917 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:06.917 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:06.917 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.917 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.917 [2024-11-16 18:56:50.350455] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.177 [2024-11-16 18:56:50.429449] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:07.177 [2024-11-16 18:56:50.429507] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:07.177 [2024-11-16 18:56:50.429522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.177 [2024-11-16 18:56:50.429540] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.177 "name": "raid_bdev1", 00:16:07.177 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:07.177 "strip_size_kb": 64, 00:16:07.177 "state": "online", 00:16:07.177 "raid_level": "raid5f", 00:16:07.177 "superblock": true, 00:16:07.177 "num_base_bdevs": 4, 00:16:07.177 "num_base_bdevs_discovered": 3, 00:16:07.177 "num_base_bdevs_operational": 3, 00:16:07.177 "base_bdevs_list": [ 00:16:07.177 { 00:16:07.177 "name": null, 00:16:07.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.177 "is_configured": false, 00:16:07.177 "data_offset": 0, 00:16:07.177 "data_size": 63488 00:16:07.177 }, 00:16:07.177 { 00:16:07.177 "name": "BaseBdev2", 00:16:07.177 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:07.177 "is_configured": true, 00:16:07.177 "data_offset": 2048, 00:16:07.177 "data_size": 63488 00:16:07.177 }, 00:16:07.177 { 00:16:07.177 "name": "BaseBdev3", 00:16:07.177 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:07.177 "is_configured": true, 00:16:07.177 "data_offset": 2048, 00:16:07.177 "data_size": 63488 00:16:07.177 }, 00:16:07.177 { 00:16:07.177 "name": "BaseBdev4", 00:16:07.177 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:07.177 "is_configured": true, 00:16:07.177 "data_offset": 2048, 00:16:07.177 "data_size": 63488 00:16:07.177 } 00:16:07.177 ] 00:16:07.177 }' 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.177 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.437 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:07.437 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:07.437 
18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:07.437 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:07.437 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:07.437 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.437 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.437 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.437 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.437 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.698 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.698 "name": "raid_bdev1", 00:16:07.698 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:07.698 "strip_size_kb": 64, 00:16:07.698 "state": "online", 00:16:07.698 "raid_level": "raid5f", 00:16:07.698 "superblock": true, 00:16:07.698 "num_base_bdevs": 4, 00:16:07.698 "num_base_bdevs_discovered": 3, 00:16:07.698 "num_base_bdevs_operational": 3, 00:16:07.698 "base_bdevs_list": [ 00:16:07.698 { 00:16:07.698 "name": null, 00:16:07.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.698 "is_configured": false, 00:16:07.698 "data_offset": 0, 00:16:07.698 "data_size": 63488 00:16:07.698 }, 00:16:07.698 { 00:16:07.698 "name": "BaseBdev2", 00:16:07.698 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:07.698 "is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 "data_size": 63488 00:16:07.698 }, 00:16:07.698 { 00:16:07.698 "name": "BaseBdev3", 00:16:07.698 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:07.698 "is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 
"data_size": 63488 00:16:07.698 }, 00:16:07.698 { 00:16:07.698 "name": "BaseBdev4", 00:16:07.698 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:07.698 "is_configured": true, 00:16:07.698 "data_offset": 2048, 00:16:07.698 "data_size": 63488 00:16:07.698 } 00:16:07.698 ] 00:16:07.698 }' 00:16:07.698 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.698 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.698 18:56:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.698 18:56:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.698 18:56:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.698 18:56:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.698 18:56:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.698 [2024-11-16 18:56:51.025271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.698 [2024-11-16 18:56:51.039149] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:07.698 18:56:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.698 18:56:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:07.698 [2024-11-16 18:56:51.048095] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.637 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.637 "name": "raid_bdev1", 00:16:08.637 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:08.637 "strip_size_kb": 64, 00:16:08.637 "state": "online", 00:16:08.637 "raid_level": "raid5f", 00:16:08.637 "superblock": true, 00:16:08.637 "num_base_bdevs": 4, 00:16:08.637 "num_base_bdevs_discovered": 4, 00:16:08.637 "num_base_bdevs_operational": 4, 00:16:08.637 "process": { 00:16:08.637 "type": "rebuild", 00:16:08.637 "target": "spare", 00:16:08.637 "progress": { 00:16:08.637 "blocks": 19200, 00:16:08.637 "percent": 10 00:16:08.637 } 00:16:08.637 }, 00:16:08.637 "base_bdevs_list": [ 00:16:08.637 { 00:16:08.637 "name": "spare", 00:16:08.637 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:08.637 "is_configured": true, 00:16:08.637 "data_offset": 2048, 00:16:08.637 "data_size": 63488 00:16:08.637 }, 00:16:08.637 { 00:16:08.637 "name": "BaseBdev2", 00:16:08.637 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:08.637 "is_configured": true, 00:16:08.637 "data_offset": 2048, 00:16:08.637 "data_size": 63488 00:16:08.637 }, 00:16:08.637 { 
00:16:08.638 "name": "BaseBdev3", 00:16:08.638 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:08.638 "is_configured": true, 00:16:08.638 "data_offset": 2048, 00:16:08.638 "data_size": 63488 00:16:08.638 }, 00:16:08.638 { 00:16:08.638 "name": "BaseBdev4", 00:16:08.638 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:08.638 "is_configured": true, 00:16:08.638 "data_offset": 2048, 00:16:08.638 "data_size": 63488 00:16:08.638 } 00:16:08.638 ] 00:16:08.638 }' 00:16:08.638 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:08.897 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=614 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.897 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.898 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.898 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.898 "name": "raid_bdev1", 00:16:08.898 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:08.898 "strip_size_kb": 64, 00:16:08.898 "state": "online", 00:16:08.898 "raid_level": "raid5f", 00:16:08.898 "superblock": true, 00:16:08.898 "num_base_bdevs": 4, 00:16:08.898 "num_base_bdevs_discovered": 4, 00:16:08.898 "num_base_bdevs_operational": 4, 00:16:08.898 "process": { 00:16:08.898 "type": "rebuild", 00:16:08.898 "target": "spare", 00:16:08.898 "progress": { 00:16:08.898 "blocks": 21120, 00:16:08.898 "percent": 11 00:16:08.898 } 00:16:08.898 }, 00:16:08.898 "base_bdevs_list": [ 00:16:08.898 { 00:16:08.898 "name": "spare", 00:16:08.898 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:08.898 "is_configured": true, 00:16:08.898 "data_offset": 2048, 00:16:08.898 "data_size": 63488 00:16:08.898 }, 00:16:08.898 { 00:16:08.898 "name": "BaseBdev2", 00:16:08.898 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:08.898 "is_configured": true, 00:16:08.898 "data_offset": 2048, 00:16:08.898 "data_size": 63488 00:16:08.898 }, 00:16:08.898 { 
00:16:08.898 "name": "BaseBdev3", 00:16:08.898 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:08.898 "is_configured": true, 00:16:08.898 "data_offset": 2048, 00:16:08.898 "data_size": 63488 00:16:08.898 }, 00:16:08.898 { 00:16:08.898 "name": "BaseBdev4", 00:16:08.898 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:08.898 "is_configured": true, 00:16:08.898 "data_offset": 2048, 00:16:08.898 "data_size": 63488 00:16:08.898 } 00:16:08.898 ] 00:16:08.898 }' 00:16:08.898 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.898 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.898 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.898 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.898 18:56:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.280 18:56:53 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.280 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.280 "name": "raid_bdev1", 00:16:10.280 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:10.280 "strip_size_kb": 64, 00:16:10.280 "state": "online", 00:16:10.280 "raid_level": "raid5f", 00:16:10.280 "superblock": true, 00:16:10.280 "num_base_bdevs": 4, 00:16:10.281 "num_base_bdevs_discovered": 4, 00:16:10.281 "num_base_bdevs_operational": 4, 00:16:10.281 "process": { 00:16:10.281 "type": "rebuild", 00:16:10.281 "target": "spare", 00:16:10.281 "progress": { 00:16:10.281 "blocks": 44160, 00:16:10.281 "percent": 23 00:16:10.281 } 00:16:10.281 }, 00:16:10.281 "base_bdevs_list": [ 00:16:10.281 { 00:16:10.281 "name": "spare", 00:16:10.281 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:10.281 "is_configured": true, 00:16:10.281 "data_offset": 2048, 00:16:10.281 "data_size": 63488 00:16:10.281 }, 00:16:10.281 { 00:16:10.281 "name": "BaseBdev2", 00:16:10.281 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:10.281 "is_configured": true, 00:16:10.281 "data_offset": 2048, 00:16:10.281 "data_size": 63488 00:16:10.281 }, 00:16:10.281 { 00:16:10.281 "name": "BaseBdev3", 00:16:10.281 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:10.281 "is_configured": true, 00:16:10.281 "data_offset": 2048, 00:16:10.281 "data_size": 63488 00:16:10.281 }, 00:16:10.281 { 00:16:10.281 "name": "BaseBdev4", 00:16:10.281 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:10.281 "is_configured": true, 00:16:10.281 "data_offset": 2048, 00:16:10.281 "data_size": 63488 00:16:10.281 } 00:16:10.281 ] 00:16:10.281 }' 00:16:10.281 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.281 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.281 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.281 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.281 18:56:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.220 "name": "raid_bdev1", 00:16:11.220 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:11.220 "strip_size_kb": 64, 00:16:11.220 "state": 
"online", 00:16:11.220 "raid_level": "raid5f", 00:16:11.220 "superblock": true, 00:16:11.220 "num_base_bdevs": 4, 00:16:11.220 "num_base_bdevs_discovered": 4, 00:16:11.220 "num_base_bdevs_operational": 4, 00:16:11.220 "process": { 00:16:11.220 "type": "rebuild", 00:16:11.220 "target": "spare", 00:16:11.220 "progress": { 00:16:11.220 "blocks": 65280, 00:16:11.220 "percent": 34 00:16:11.220 } 00:16:11.220 }, 00:16:11.220 "base_bdevs_list": [ 00:16:11.220 { 00:16:11.220 "name": "spare", 00:16:11.220 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:11.220 "is_configured": true, 00:16:11.220 "data_offset": 2048, 00:16:11.220 "data_size": 63488 00:16:11.220 }, 00:16:11.220 { 00:16:11.220 "name": "BaseBdev2", 00:16:11.220 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:11.220 "is_configured": true, 00:16:11.220 "data_offset": 2048, 00:16:11.220 "data_size": 63488 00:16:11.220 }, 00:16:11.220 { 00:16:11.220 "name": "BaseBdev3", 00:16:11.220 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:11.220 "is_configured": true, 00:16:11.220 "data_offset": 2048, 00:16:11.220 "data_size": 63488 00:16:11.220 }, 00:16:11.220 { 00:16:11.220 "name": "BaseBdev4", 00:16:11.220 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:11.220 "is_configured": true, 00:16:11.220 "data_offset": 2048, 00:16:11.220 "data_size": 63488 00:16:11.220 } 00:16:11.220 ] 00:16:11.220 }' 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:11.220 18:56:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.159 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.419 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:12.419 "name": "raid_bdev1", 00:16:12.419 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:12.419 "strip_size_kb": 64, 00:16:12.419 "state": "online", 00:16:12.419 "raid_level": "raid5f", 00:16:12.419 "superblock": true, 00:16:12.419 "num_base_bdevs": 4, 00:16:12.419 "num_base_bdevs_discovered": 4, 00:16:12.419 "num_base_bdevs_operational": 4, 00:16:12.419 "process": { 00:16:12.419 "type": "rebuild", 00:16:12.419 "target": "spare", 00:16:12.419 "progress": { 00:16:12.419 "blocks": 86400, 00:16:12.419 "percent": 45 00:16:12.419 } 00:16:12.419 }, 00:16:12.419 "base_bdevs_list": [ 00:16:12.419 { 00:16:12.419 "name": "spare", 00:16:12.419 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 
00:16:12.419 "is_configured": true, 00:16:12.419 "data_offset": 2048, 00:16:12.419 "data_size": 63488 00:16:12.419 }, 00:16:12.419 { 00:16:12.419 "name": "BaseBdev2", 00:16:12.419 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:12.419 "is_configured": true, 00:16:12.419 "data_offset": 2048, 00:16:12.419 "data_size": 63488 00:16:12.419 }, 00:16:12.419 { 00:16:12.419 "name": "BaseBdev3", 00:16:12.419 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:12.419 "is_configured": true, 00:16:12.419 "data_offset": 2048, 00:16:12.419 "data_size": 63488 00:16:12.419 }, 00:16:12.419 { 00:16:12.419 "name": "BaseBdev4", 00:16:12.419 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:12.419 "is_configured": true, 00:16:12.419 "data_offset": 2048, 00:16:12.419 "data_size": 63488 00:16:12.419 } 00:16:12.419 ] 00:16:12.419 }' 00:16:12.419 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:12.419 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:12.419 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:12.419 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:12.419 18:56:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:13.358 18:56:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.358 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.358 "name": "raid_bdev1", 00:16:13.358 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:13.358 "strip_size_kb": 64, 00:16:13.358 "state": "online", 00:16:13.358 "raid_level": "raid5f", 00:16:13.358 "superblock": true, 00:16:13.358 "num_base_bdevs": 4, 00:16:13.358 "num_base_bdevs_discovered": 4, 00:16:13.358 "num_base_bdevs_operational": 4, 00:16:13.358 "process": { 00:16:13.358 "type": "rebuild", 00:16:13.358 "target": "spare", 00:16:13.358 "progress": { 00:16:13.358 "blocks": 107520, 00:16:13.358 "percent": 56 00:16:13.358 } 00:16:13.358 }, 00:16:13.358 "base_bdevs_list": [ 00:16:13.358 { 00:16:13.358 "name": "spare", 00:16:13.358 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:13.358 "is_configured": true, 00:16:13.358 "data_offset": 2048, 00:16:13.358 "data_size": 63488 00:16:13.358 }, 00:16:13.358 { 00:16:13.358 "name": "BaseBdev2", 00:16:13.358 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:13.358 "is_configured": true, 00:16:13.358 "data_offset": 2048, 00:16:13.358 "data_size": 63488 00:16:13.358 }, 00:16:13.358 { 00:16:13.358 "name": "BaseBdev3", 00:16:13.358 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:13.358 "is_configured": true, 00:16:13.358 "data_offset": 2048, 00:16:13.358 
"data_size": 63488 00:16:13.358 }, 00:16:13.358 { 00:16:13.358 "name": "BaseBdev4", 00:16:13.358 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:13.358 "is_configured": true, 00:16:13.358 "data_offset": 2048, 00:16:13.359 "data_size": 63488 00:16:13.359 } 00:16:13.359 ] 00:16:13.359 }' 00:16:13.359 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.359 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.359 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.617 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.617 18:56:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.555 
18:56:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.555 "name": "raid_bdev1", 00:16:14.555 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:14.555 "strip_size_kb": 64, 00:16:14.555 "state": "online", 00:16:14.555 "raid_level": "raid5f", 00:16:14.555 "superblock": true, 00:16:14.555 "num_base_bdevs": 4, 00:16:14.555 "num_base_bdevs_discovered": 4, 00:16:14.555 "num_base_bdevs_operational": 4, 00:16:14.555 "process": { 00:16:14.555 "type": "rebuild", 00:16:14.555 "target": "spare", 00:16:14.555 "progress": { 00:16:14.555 "blocks": 130560, 00:16:14.555 "percent": 68 00:16:14.555 } 00:16:14.555 }, 00:16:14.555 "base_bdevs_list": [ 00:16:14.555 { 00:16:14.555 "name": "spare", 00:16:14.555 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:14.555 "is_configured": true, 00:16:14.555 "data_offset": 2048, 00:16:14.555 "data_size": 63488 00:16:14.555 }, 00:16:14.555 { 00:16:14.555 "name": "BaseBdev2", 00:16:14.555 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:14.555 "is_configured": true, 00:16:14.555 "data_offset": 2048, 00:16:14.555 "data_size": 63488 00:16:14.555 }, 00:16:14.555 { 00:16:14.555 "name": "BaseBdev3", 00:16:14.555 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:14.555 "is_configured": true, 00:16:14.555 "data_offset": 2048, 00:16:14.555 "data_size": 63488 00:16:14.555 }, 00:16:14.555 { 00:16:14.555 "name": "BaseBdev4", 00:16:14.555 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:14.555 "is_configured": true, 00:16:14.555 "data_offset": 2048, 00:16:14.555 "data_size": 63488 00:16:14.555 } 00:16:14.555 ] 00:16:14.555 }' 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.555 18:56:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:14.555 18:56:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.815 18:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:14.815 18:56:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.754 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.754 "name": "raid_bdev1", 00:16:15.754 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:15.754 "strip_size_kb": 64, 00:16:15.754 "state": "online", 00:16:15.754 "raid_level": "raid5f", 00:16:15.754 "superblock": true, 00:16:15.754 "num_base_bdevs": 4, 00:16:15.754 "num_base_bdevs_discovered": 4, 00:16:15.754 "num_base_bdevs_operational": 
4, 00:16:15.754 "process": { 00:16:15.754 "type": "rebuild", 00:16:15.754 "target": "spare", 00:16:15.754 "progress": { 00:16:15.754 "blocks": 151680, 00:16:15.754 "percent": 79 00:16:15.754 } 00:16:15.754 }, 00:16:15.755 "base_bdevs_list": [ 00:16:15.755 { 00:16:15.755 "name": "spare", 00:16:15.755 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:15.755 "is_configured": true, 00:16:15.755 "data_offset": 2048, 00:16:15.755 "data_size": 63488 00:16:15.755 }, 00:16:15.755 { 00:16:15.755 "name": "BaseBdev2", 00:16:15.755 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:15.755 "is_configured": true, 00:16:15.755 "data_offset": 2048, 00:16:15.755 "data_size": 63488 00:16:15.755 }, 00:16:15.755 { 00:16:15.755 "name": "BaseBdev3", 00:16:15.755 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:15.755 "is_configured": true, 00:16:15.755 "data_offset": 2048, 00:16:15.755 "data_size": 63488 00:16:15.755 }, 00:16:15.755 { 00:16:15.755 "name": "BaseBdev4", 00:16:15.755 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:15.755 "is_configured": true, 00:16:15.755 "data_offset": 2048, 00:16:15.755 "data_size": 63488 00:16:15.755 } 00:16:15.755 ] 00:16:15.755 }' 00:16:15.755 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.755 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.755 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.755 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.755 18:56:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.137 
18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.137 "name": "raid_bdev1", 00:16:17.137 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:17.137 "strip_size_kb": 64, 00:16:17.137 "state": "online", 00:16:17.137 "raid_level": "raid5f", 00:16:17.137 "superblock": true, 00:16:17.137 "num_base_bdevs": 4, 00:16:17.137 "num_base_bdevs_discovered": 4, 00:16:17.137 "num_base_bdevs_operational": 4, 00:16:17.137 "process": { 00:16:17.137 "type": "rebuild", 00:16:17.137 "target": "spare", 00:16:17.137 "progress": { 00:16:17.137 "blocks": 174720, 00:16:17.137 "percent": 91 00:16:17.137 } 00:16:17.137 }, 00:16:17.137 "base_bdevs_list": [ 00:16:17.137 { 00:16:17.137 "name": "spare", 00:16:17.137 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:17.137 "is_configured": true, 00:16:17.137 "data_offset": 2048, 00:16:17.137 "data_size": 63488 00:16:17.137 }, 00:16:17.137 { 00:16:17.137 "name": "BaseBdev2", 00:16:17.137 "uuid": 
"920125a0-b762-5688-aa07-57472eca3adf", 00:16:17.137 "is_configured": true, 00:16:17.137 "data_offset": 2048, 00:16:17.137 "data_size": 63488 00:16:17.137 }, 00:16:17.137 { 00:16:17.137 "name": "BaseBdev3", 00:16:17.137 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:17.137 "is_configured": true, 00:16:17.137 "data_offset": 2048, 00:16:17.137 "data_size": 63488 00:16:17.137 }, 00:16:17.137 { 00:16:17.137 "name": "BaseBdev4", 00:16:17.137 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:17.137 "is_configured": true, 00:16:17.137 "data_offset": 2048, 00:16:17.137 "data_size": 63488 00:16:17.137 } 00:16:17.137 ] 00:16:17.137 }' 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.137 18:57:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:17.720 [2024-11-16 18:57:01.090467] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:17.720 [2024-11-16 18:57:01.090532] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:17.720 [2024-11-16 18:57:01.090657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.980 "name": "raid_bdev1", 00:16:17.980 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:17.980 "strip_size_kb": 64, 00:16:17.980 "state": "online", 00:16:17.980 "raid_level": "raid5f", 00:16:17.980 "superblock": true, 00:16:17.980 "num_base_bdevs": 4, 00:16:17.980 "num_base_bdevs_discovered": 4, 00:16:17.980 "num_base_bdevs_operational": 4, 00:16:17.980 "base_bdevs_list": [ 00:16:17.980 { 00:16:17.980 "name": "spare", 00:16:17.980 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:17.980 "is_configured": true, 00:16:17.980 "data_offset": 2048, 00:16:17.980 "data_size": 63488 00:16:17.980 }, 00:16:17.980 { 00:16:17.980 "name": "BaseBdev2", 00:16:17.980 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:17.980 "is_configured": true, 00:16:17.980 "data_offset": 2048, 00:16:17.980 "data_size": 63488 00:16:17.980 }, 00:16:17.980 { 00:16:17.980 "name": "BaseBdev3", 00:16:17.980 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:17.980 "is_configured": true, 00:16:17.980 "data_offset": 2048, 00:16:17.980 "data_size": 63488 00:16:17.980 }, 
00:16:17.980 { 00:16:17.980 "name": "BaseBdev4", 00:16:17.980 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:17.980 "is_configured": true, 00:16:17.980 "data_offset": 2048, 00:16:17.980 "data_size": 63488 00:16:17.980 } 00:16:17.980 ] 00:16:17.980 }' 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:17.980 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.240 "name": "raid_bdev1", 00:16:18.240 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:18.240 "strip_size_kb": 64, 00:16:18.240 "state": "online", 00:16:18.240 "raid_level": "raid5f", 00:16:18.240 "superblock": true, 00:16:18.240 "num_base_bdevs": 4, 00:16:18.240 "num_base_bdevs_discovered": 4, 00:16:18.240 "num_base_bdevs_operational": 4, 00:16:18.240 "base_bdevs_list": [ 00:16:18.240 { 00:16:18.240 "name": "spare", 00:16:18.240 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:18.240 "is_configured": true, 00:16:18.240 "data_offset": 2048, 00:16:18.240 "data_size": 63488 00:16:18.240 }, 00:16:18.240 { 00:16:18.240 "name": "BaseBdev2", 00:16:18.240 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:18.240 "is_configured": true, 00:16:18.240 "data_offset": 2048, 00:16:18.240 "data_size": 63488 00:16:18.240 }, 00:16:18.240 { 00:16:18.240 "name": "BaseBdev3", 00:16:18.240 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:18.240 "is_configured": true, 00:16:18.240 "data_offset": 2048, 00:16:18.240 "data_size": 63488 00:16:18.240 }, 00:16:18.240 { 00:16:18.240 "name": "BaseBdev4", 00:16:18.240 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:18.240 "is_configured": true, 00:16:18.240 "data_offset": 2048, 00:16:18.240 "data_size": 63488 00:16:18.240 } 00:16:18.240 ] 00:16:18.240 }' 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:18.240 18:57:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.240 "name": "raid_bdev1", 00:16:18.240 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:18.240 "strip_size_kb": 64, 00:16:18.240 "state": "online", 00:16:18.240 "raid_level": "raid5f", 00:16:18.240 "superblock": true, 00:16:18.240 "num_base_bdevs": 4, 00:16:18.240 "num_base_bdevs_discovered": 4, 00:16:18.240 "num_base_bdevs_operational": 4, 00:16:18.240 
"base_bdevs_list": [ 00:16:18.240 { 00:16:18.240 "name": "spare", 00:16:18.240 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:18.240 "is_configured": true, 00:16:18.240 "data_offset": 2048, 00:16:18.240 "data_size": 63488 00:16:18.240 }, 00:16:18.240 { 00:16:18.240 "name": "BaseBdev2", 00:16:18.240 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:18.240 "is_configured": true, 00:16:18.240 "data_offset": 2048, 00:16:18.240 "data_size": 63488 00:16:18.240 }, 00:16:18.240 { 00:16:18.240 "name": "BaseBdev3", 00:16:18.240 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:18.240 "is_configured": true, 00:16:18.240 "data_offset": 2048, 00:16:18.240 "data_size": 63488 00:16:18.240 }, 00:16:18.240 { 00:16:18.240 "name": "BaseBdev4", 00:16:18.240 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:18.240 "is_configured": true, 00:16:18.240 "data_offset": 2048, 00:16:18.240 "data_size": 63488 00:16:18.240 } 00:16:18.240 ] 00:16:18.240 }' 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.240 18:57:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.811 [2024-11-16 18:57:02.033328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:18.811 [2024-11-16 18:57:02.033365] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:18.811 [2024-11-16 18:57:02.033441] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:18.811 [2024-11-16 18:57:02.033534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:16:18.811 [2024-11-16 18:57:02.033559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( 
i = 0 )) 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:18.811 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:18.811 /dev/nbd0 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:19.072 1+0 records in 00:16:19.072 1+0 records out 00:16:19.072 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202433 s, 20.2 MB/s 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:19.072 18:57:02 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.072 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:19.072 /dev/nbd1 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 
00:16:19.332 1+0 records in 00:16:19.332 1+0 records out 00:16:19.332 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000415701 s, 9.9 MB/s 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.332 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:19.592 18:57:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.853 [2024-11-16 18:57:03.161029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:19.853 [2024-11-16 18:57:03.161085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:19.853 [2024-11-16 18:57:03.161110] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:19.853 [2024-11-16 18:57:03.161119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:19.853 [2024-11-16 18:57:03.163455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:19.853 [2024-11-16 18:57:03.163488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:19.853 [2024-11-16 18:57:03.163572] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:19.853 [2024-11-16 18:57:03.163622] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.853 [2024-11-16 18:57:03.163764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.853 [2024-11-16 18:57:03.163850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.853 [2024-11-16 18:57:03.163935] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:19.853 spare 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.853 [2024-11-16 18:57:03.263851] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:19.853 [2024-11-16 18:57:03.263883] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:19.853 [2024-11-16 18:57:03.264153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:16:19.853 [2024-11-16 18:57:03.270985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:19.853 [2024-11-16 18:57:03.271008] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:19.853 [2024-11-16 18:57:03.271178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.853 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.113 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.113 "name": "raid_bdev1", 00:16:20.113 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:20.113 "strip_size_kb": 64, 00:16:20.113 "state": "online", 00:16:20.113 "raid_level": "raid5f", 00:16:20.113 "superblock": true, 00:16:20.113 "num_base_bdevs": 4, 00:16:20.113 "num_base_bdevs_discovered": 4, 00:16:20.113 "num_base_bdevs_operational": 4, 00:16:20.113 "base_bdevs_list": [ 00:16:20.113 { 00:16:20.113 "name": "spare", 00:16:20.113 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:20.113 "is_configured": true, 00:16:20.113 "data_offset": 2048, 00:16:20.113 "data_size": 63488 00:16:20.113 }, 00:16:20.113 { 00:16:20.113 "name": "BaseBdev2", 00:16:20.113 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:20.113 "is_configured": true, 00:16:20.113 "data_offset": 
2048, 00:16:20.113 "data_size": 63488 00:16:20.113 }, 00:16:20.113 { 00:16:20.113 "name": "BaseBdev3", 00:16:20.113 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:20.113 "is_configured": true, 00:16:20.113 "data_offset": 2048, 00:16:20.113 "data_size": 63488 00:16:20.113 }, 00:16:20.113 { 00:16:20.113 "name": "BaseBdev4", 00:16:20.113 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:20.113 "is_configured": true, 00:16:20.113 "data_offset": 2048, 00:16:20.113 "data_size": 63488 00:16:20.113 } 00:16:20.113 ] 00:16:20.113 }' 00:16:20.113 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.113 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.373 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.373 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.373 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.373 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.373 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.373 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.373 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.373 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.374 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.374 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.374 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.374 "name": 
"raid_bdev1", 00:16:20.374 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:20.374 "strip_size_kb": 64, 00:16:20.374 "state": "online", 00:16:20.374 "raid_level": "raid5f", 00:16:20.374 "superblock": true, 00:16:20.374 "num_base_bdevs": 4, 00:16:20.374 "num_base_bdevs_discovered": 4, 00:16:20.374 "num_base_bdevs_operational": 4, 00:16:20.374 "base_bdevs_list": [ 00:16:20.374 { 00:16:20.374 "name": "spare", 00:16:20.374 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:20.374 "is_configured": true, 00:16:20.374 "data_offset": 2048, 00:16:20.374 "data_size": 63488 00:16:20.374 }, 00:16:20.374 { 00:16:20.374 "name": "BaseBdev2", 00:16:20.374 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:20.374 "is_configured": true, 00:16:20.374 "data_offset": 2048, 00:16:20.374 "data_size": 63488 00:16:20.374 }, 00:16:20.374 { 00:16:20.374 "name": "BaseBdev3", 00:16:20.374 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:20.374 "is_configured": true, 00:16:20.374 "data_offset": 2048, 00:16:20.374 "data_size": 63488 00:16:20.374 }, 00:16:20.374 { 00:16:20.374 "name": "BaseBdev4", 00:16:20.374 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:20.374 "is_configured": true, 00:16:20.374 "data_offset": 2048, 00:16:20.374 "data_size": 63488 00:16:20.374 } 00:16:20.374 ] 00:16:20.374 }' 00:16:20.374 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.634 
18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.634 [2024-11-16 18:57:03.954018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.634 18:57:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.634 18:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.634 "name": "raid_bdev1", 00:16:20.634 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:20.634 "strip_size_kb": 64, 00:16:20.634 "state": "online", 00:16:20.634 "raid_level": "raid5f", 00:16:20.634 "superblock": true, 00:16:20.634 "num_base_bdevs": 4, 00:16:20.634 "num_base_bdevs_discovered": 3, 00:16:20.634 "num_base_bdevs_operational": 3, 00:16:20.634 "base_bdevs_list": [ 00:16:20.634 { 00:16:20.634 "name": null, 00:16:20.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.634 "is_configured": false, 00:16:20.634 "data_offset": 0, 00:16:20.634 "data_size": 63488 00:16:20.634 }, 00:16:20.634 { 00:16:20.634 "name": "BaseBdev2", 00:16:20.634 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:20.634 "is_configured": true, 00:16:20.634 "data_offset": 2048, 00:16:20.634 "data_size": 63488 00:16:20.634 }, 00:16:20.634 { 00:16:20.634 "name": "BaseBdev3", 00:16:20.634 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:20.634 "is_configured": true, 00:16:20.634 "data_offset": 2048, 00:16:20.634 "data_size": 63488 00:16:20.634 }, 00:16:20.634 { 00:16:20.634 "name": "BaseBdev4", 00:16:20.634 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:20.634 "is_configured": true, 00:16:20.634 "data_offset": 
2048, 00:16:20.634 "data_size": 63488 00:16:20.634 } 00:16:20.634 ] 00:16:20.634 }' 00:16:20.634 18:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.634 18:57:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.894 18:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:20.894 18:57:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.894 18:57:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.894 [2024-11-16 18:57:04.361325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:20.894 [2024-11-16 18:57:04.361511] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:20.894 [2024-11-16 18:57:04.361530] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:20.894 [2024-11-16 18:57:04.361561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.154 [2024-11-16 18:57:04.375625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:16:21.154 18:57:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.154 18:57:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:21.154 [2024-11-16 18:57:04.384470] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.096 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.096 "name": "raid_bdev1", 00:16:22.096 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:22.096 "strip_size_kb": 64, 00:16:22.096 "state": "online", 00:16:22.096 
"raid_level": "raid5f", 00:16:22.096 "superblock": true, 00:16:22.096 "num_base_bdevs": 4, 00:16:22.096 "num_base_bdevs_discovered": 4, 00:16:22.096 "num_base_bdevs_operational": 4, 00:16:22.096 "process": { 00:16:22.096 "type": "rebuild", 00:16:22.096 "target": "spare", 00:16:22.096 "progress": { 00:16:22.096 "blocks": 19200, 00:16:22.096 "percent": 10 00:16:22.096 } 00:16:22.096 }, 00:16:22.096 "base_bdevs_list": [ 00:16:22.096 { 00:16:22.096 "name": "spare", 00:16:22.096 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:22.096 "is_configured": true, 00:16:22.096 "data_offset": 2048, 00:16:22.096 "data_size": 63488 00:16:22.096 }, 00:16:22.096 { 00:16:22.096 "name": "BaseBdev2", 00:16:22.096 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:22.097 "is_configured": true, 00:16:22.097 "data_offset": 2048, 00:16:22.097 "data_size": 63488 00:16:22.097 }, 00:16:22.097 { 00:16:22.097 "name": "BaseBdev3", 00:16:22.097 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:22.097 "is_configured": true, 00:16:22.097 "data_offset": 2048, 00:16:22.097 "data_size": 63488 00:16:22.097 }, 00:16:22.097 { 00:16:22.097 "name": "BaseBdev4", 00:16:22.097 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:22.097 "is_configured": true, 00:16:22.097 "data_offset": 2048, 00:16:22.097 "data_size": 63488 00:16:22.097 } 00:16:22.097 ] 00:16:22.097 }' 00:16:22.097 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.097 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.097 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.097 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.097 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:22.097 18:57:05 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.097 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.097 [2024-11-16 18:57:05.539302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.356 [2024-11-16 18:57:05.590293] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:22.356 [2024-11-16 18:57:05.590370] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:22.356 [2024-11-16 18:57:05.590385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:22.356 [2024-11-16 18:57:05.590394] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:22.356 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.357 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.357 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.357 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.357 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.357 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:22.357 "name": "raid_bdev1", 00:16:22.357 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:22.357 "strip_size_kb": 64, 00:16:22.357 "state": "online", 00:16:22.357 "raid_level": "raid5f", 00:16:22.357 "superblock": true, 00:16:22.357 "num_base_bdevs": 4, 00:16:22.357 "num_base_bdevs_discovered": 3, 00:16:22.357 "num_base_bdevs_operational": 3, 00:16:22.357 "base_bdevs_list": [ 00:16:22.357 { 00:16:22.357 "name": null, 00:16:22.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.357 "is_configured": false, 00:16:22.357 "data_offset": 0, 00:16:22.357 "data_size": 63488 00:16:22.357 }, 00:16:22.357 { 00:16:22.357 "name": "BaseBdev2", 00:16:22.357 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:22.357 "is_configured": true, 00:16:22.357 "data_offset": 2048, 00:16:22.357 "data_size": 63488 00:16:22.357 }, 00:16:22.357 { 00:16:22.357 "name": "BaseBdev3", 00:16:22.357 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:22.357 "is_configured": true, 00:16:22.357 "data_offset": 2048, 00:16:22.357 "data_size": 63488 00:16:22.357 }, 00:16:22.357 { 00:16:22.357 "name": "BaseBdev4", 00:16:22.357 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:22.357 "is_configured": true, 00:16:22.357 "data_offset": 2048, 00:16:22.357 "data_size": 63488 00:16:22.357 } 00:16:22.357 ] 00:16:22.357 
}' 00:16:22.357 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:22.357 18:57:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.926 18:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:22.926 18:57:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.926 18:57:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.926 [2024-11-16 18:57:06.106370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:22.926 [2024-11-16 18:57:06.106435] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.926 [2024-11-16 18:57:06.106484] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:22.926 [2024-11-16 18:57:06.106496] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.926 [2024-11-16 18:57:06.106986] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.926 [2024-11-16 18:57:06.107018] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:22.926 [2024-11-16 18:57:06.107115] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:22.926 [2024-11-16 18:57:06.107135] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:22.926 [2024-11-16 18:57:06.107146] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:22.926 [2024-11-16 18:57:06.107176] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:22.926 [2024-11-16 18:57:06.120884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:16:22.926 spare 00:16:22.926 18:57:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.926 18:57:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:22.926 [2024-11-16 18:57:06.129263] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.868 "name": "raid_bdev1", 00:16:23.868 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:23.868 "strip_size_kb": 64, 00:16:23.868 "state": 
"online", 00:16:23.868 "raid_level": "raid5f", 00:16:23.868 "superblock": true, 00:16:23.868 "num_base_bdevs": 4, 00:16:23.868 "num_base_bdevs_discovered": 4, 00:16:23.868 "num_base_bdevs_operational": 4, 00:16:23.868 "process": { 00:16:23.868 "type": "rebuild", 00:16:23.868 "target": "spare", 00:16:23.868 "progress": { 00:16:23.868 "blocks": 19200, 00:16:23.868 "percent": 10 00:16:23.868 } 00:16:23.868 }, 00:16:23.868 "base_bdevs_list": [ 00:16:23.868 { 00:16:23.868 "name": "spare", 00:16:23.868 "uuid": "5756dbf4-9488-57ed-9edf-a423096dacdc", 00:16:23.868 "is_configured": true, 00:16:23.868 "data_offset": 2048, 00:16:23.868 "data_size": 63488 00:16:23.868 }, 00:16:23.868 { 00:16:23.868 "name": "BaseBdev2", 00:16:23.868 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:23.868 "is_configured": true, 00:16:23.868 "data_offset": 2048, 00:16:23.868 "data_size": 63488 00:16:23.868 }, 00:16:23.868 { 00:16:23.868 "name": "BaseBdev3", 00:16:23.868 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:23.868 "is_configured": true, 00:16:23.868 "data_offset": 2048, 00:16:23.868 "data_size": 63488 00:16:23.868 }, 00:16:23.868 { 00:16:23.868 "name": "BaseBdev4", 00:16:23.868 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:23.868 "is_configured": true, 00:16:23.868 "data_offset": 2048, 00:16:23.868 "data_size": 63488 00:16:23.868 } 00:16:23.868 ] 00:16:23.868 }' 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:23.868 18:57:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.868 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.868 [2024-11-16 18:57:07.287932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.868 [2024-11-16 18:57:07.334776] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:23.868 [2024-11-16 18:57:07.334824] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.868 [2024-11-16 18:57:07.334841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:23.868 [2024-11-16 18:57:07.334847] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.129 18:57:07 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.129 "name": "raid_bdev1", 00:16:24.129 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:24.129 "strip_size_kb": 64, 00:16:24.129 "state": "online", 00:16:24.129 "raid_level": "raid5f", 00:16:24.129 "superblock": true, 00:16:24.129 "num_base_bdevs": 4, 00:16:24.129 "num_base_bdevs_discovered": 3, 00:16:24.129 "num_base_bdevs_operational": 3, 00:16:24.129 "base_bdevs_list": [ 00:16:24.129 { 00:16:24.129 "name": null, 00:16:24.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.129 "is_configured": false, 00:16:24.129 "data_offset": 0, 00:16:24.129 "data_size": 63488 00:16:24.129 }, 00:16:24.129 { 00:16:24.129 "name": "BaseBdev2", 00:16:24.129 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 2048, 00:16:24.129 "data_size": 63488 00:16:24.129 }, 00:16:24.129 { 00:16:24.129 "name": "BaseBdev3", 00:16:24.129 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 2048, 00:16:24.129 "data_size": 63488 00:16:24.129 }, 00:16:24.129 { 00:16:24.129 "name": "BaseBdev4", 00:16:24.129 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:24.129 "is_configured": true, 00:16:24.129 "data_offset": 2048, 00:16:24.129 
"data_size": 63488 00:16:24.129 } 00:16:24.129 ] 00:16:24.129 }' 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.129 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:24.389 "name": "raid_bdev1", 00:16:24.389 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:24.389 "strip_size_kb": 64, 00:16:24.389 "state": "online", 00:16:24.389 "raid_level": "raid5f", 00:16:24.389 "superblock": true, 00:16:24.389 "num_base_bdevs": 4, 00:16:24.389 "num_base_bdevs_discovered": 3, 00:16:24.389 "num_base_bdevs_operational": 3, 00:16:24.389 "base_bdevs_list": [ 00:16:24.389 { 00:16:24.389 "name": null, 00:16:24.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:24.389 
"is_configured": false, 00:16:24.389 "data_offset": 0, 00:16:24.389 "data_size": 63488 00:16:24.389 }, 00:16:24.389 { 00:16:24.389 "name": "BaseBdev2", 00:16:24.389 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:24.389 "is_configured": true, 00:16:24.389 "data_offset": 2048, 00:16:24.389 "data_size": 63488 00:16:24.389 }, 00:16:24.389 { 00:16:24.389 "name": "BaseBdev3", 00:16:24.389 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:24.389 "is_configured": true, 00:16:24.389 "data_offset": 2048, 00:16:24.389 "data_size": 63488 00:16:24.389 }, 00:16:24.389 { 00:16:24.389 "name": "BaseBdev4", 00:16:24.389 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:24.389 "is_configured": true, 00:16:24.389 "data_offset": 2048, 00:16:24.389 "data_size": 63488 00:16:24.389 } 00:16:24.389 ] 00:16:24.389 }' 00:16:24.389 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.649 18:57:07 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.649 [2024-11-16 18:57:07.946420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:24.649 [2024-11-16 18:57:07.946483] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.649 [2024-11-16 18:57:07.946505] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:24.649 [2024-11-16 18:57:07.946514] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.649 [2024-11-16 18:57:07.946957] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.649 [2024-11-16 18:57:07.946975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:24.649 [2024-11-16 18:57:07.947048] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:24.649 [2024-11-16 18:57:07.947064] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:24.649 [2024-11-16 18:57:07.947075] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:24.649 [2024-11-16 18:57:07.947084] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:24.649 BaseBdev1 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.649 18:57:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:25.588 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.588 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.588 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:25.588 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.588 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.589 18:57:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.589 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.589 "name": "raid_bdev1", 00:16:25.589 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:25.589 "strip_size_kb": 64, 00:16:25.589 "state": "online", 00:16:25.589 "raid_level": "raid5f", 00:16:25.589 "superblock": true, 00:16:25.589 "num_base_bdevs": 4, 00:16:25.589 "num_base_bdevs_discovered": 3, 00:16:25.589 "num_base_bdevs_operational": 3, 00:16:25.589 "base_bdevs_list": [ 00:16:25.589 { 00:16:25.589 "name": null, 00:16:25.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.589 "is_configured": false, 00:16:25.589 
"data_offset": 0, 00:16:25.589 "data_size": 63488 00:16:25.589 }, 00:16:25.589 { 00:16:25.589 "name": "BaseBdev2", 00:16:25.589 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:25.589 "is_configured": true, 00:16:25.589 "data_offset": 2048, 00:16:25.589 "data_size": 63488 00:16:25.589 }, 00:16:25.589 { 00:16:25.589 "name": "BaseBdev3", 00:16:25.589 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:25.589 "is_configured": true, 00:16:25.589 "data_offset": 2048, 00:16:25.589 "data_size": 63488 00:16:25.589 }, 00:16:25.589 { 00:16:25.589 "name": "BaseBdev4", 00:16:25.589 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:25.589 "is_configured": true, 00:16:25.589 "data_offset": 2048, 00:16:25.589 "data_size": 63488 00:16:25.589 } 00:16:25.589 ] 00:16:25.589 }' 00:16:25.589 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.589 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.161 "name": "raid_bdev1", 00:16:26.161 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:26.161 "strip_size_kb": 64, 00:16:26.161 "state": "online", 00:16:26.161 "raid_level": "raid5f", 00:16:26.161 "superblock": true, 00:16:26.161 "num_base_bdevs": 4, 00:16:26.161 "num_base_bdevs_discovered": 3, 00:16:26.161 "num_base_bdevs_operational": 3, 00:16:26.161 "base_bdevs_list": [ 00:16:26.161 { 00:16:26.161 "name": null, 00:16:26.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.161 "is_configured": false, 00:16:26.161 "data_offset": 0, 00:16:26.161 "data_size": 63488 00:16:26.161 }, 00:16:26.161 { 00:16:26.161 "name": "BaseBdev2", 00:16:26.161 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:26.161 "is_configured": true, 00:16:26.161 "data_offset": 2048, 00:16:26.161 "data_size": 63488 00:16:26.161 }, 00:16:26.161 { 00:16:26.161 "name": "BaseBdev3", 00:16:26.161 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:26.161 "is_configured": true, 00:16:26.161 "data_offset": 2048, 00:16:26.161 "data_size": 63488 00:16:26.161 }, 00:16:26.161 { 00:16:26.161 "name": "BaseBdev4", 00:16:26.161 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:26.161 "is_configured": true, 00:16:26.161 "data_offset": 2048, 00:16:26.161 "data_size": 63488 00:16:26.161 } 00:16:26.161 ] 00:16:26.161 }' 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.161 
18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.161 [2024-11-16 18:57:09.515801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:26.161 [2024-11-16 18:57:09.515976] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:26.161 [2024-11-16 18:57:09.515993] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:26.161 request: 00:16:26.161 { 00:16:26.161 "base_bdev": "BaseBdev1", 00:16:26.161 "raid_bdev": "raid_bdev1", 00:16:26.161 "method": "bdev_raid_add_base_bdev", 00:16:26.161 "req_id": 1 00:16:26.161 } 00:16:26.161 Got JSON-RPC error response 00:16:26.161 response: 00:16:26.161 { 00:16:26.161 "code": -22, 00:16:26.161 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:16:26.161 } 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:26.161 18:57:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.101 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.361 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.361 "name": "raid_bdev1", 00:16:27.361 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:27.361 "strip_size_kb": 64, 00:16:27.361 "state": "online", 00:16:27.361 "raid_level": "raid5f", 00:16:27.361 "superblock": true, 00:16:27.361 "num_base_bdevs": 4, 00:16:27.361 "num_base_bdevs_discovered": 3, 00:16:27.361 "num_base_bdevs_operational": 3, 00:16:27.361 "base_bdevs_list": [ 00:16:27.361 { 00:16:27.361 "name": null, 00:16:27.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.361 "is_configured": false, 00:16:27.361 "data_offset": 0, 00:16:27.361 "data_size": 63488 00:16:27.361 }, 00:16:27.361 { 00:16:27.361 "name": "BaseBdev2", 00:16:27.361 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:27.361 "is_configured": true, 00:16:27.361 "data_offset": 2048, 00:16:27.361 "data_size": 63488 00:16:27.361 }, 00:16:27.361 { 00:16:27.361 "name": "BaseBdev3", 00:16:27.361 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:27.361 "is_configured": true, 00:16:27.361 "data_offset": 2048, 00:16:27.361 "data_size": 63488 00:16:27.361 }, 00:16:27.361 { 00:16:27.361 "name": "BaseBdev4", 00:16:27.361 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:27.361 "is_configured": true, 00:16:27.361 "data_offset": 2048, 00:16:27.361 "data_size": 63488 00:16:27.361 } 00:16:27.361 ] 00:16:27.361 }' 00:16:27.361 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.361 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.621 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:27.621 "name": "raid_bdev1", 00:16:27.621 "uuid": "e336d7e7-57e8-48e9-9895-0df01c4f274b", 00:16:27.621 "strip_size_kb": 64, 00:16:27.621 "state": "online", 00:16:27.621 "raid_level": "raid5f", 00:16:27.621 "superblock": true, 00:16:27.621 "num_base_bdevs": 4, 00:16:27.621 "num_base_bdevs_discovered": 3, 00:16:27.621 "num_base_bdevs_operational": 3, 00:16:27.621 "base_bdevs_list": [ 00:16:27.621 { 00:16:27.621 "name": null, 00:16:27.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.621 "is_configured": false, 00:16:27.621 "data_offset": 0, 00:16:27.621 "data_size": 63488 00:16:27.621 }, 00:16:27.621 { 00:16:27.622 "name": "BaseBdev2", 00:16:27.622 "uuid": "920125a0-b762-5688-aa07-57472eca3adf", 00:16:27.622 "is_configured": true, 
00:16:27.622 "data_offset": 2048, 00:16:27.622 "data_size": 63488 00:16:27.622 }, 00:16:27.622 { 00:16:27.622 "name": "BaseBdev3", 00:16:27.622 "uuid": "675fa241-283f-50dd-8b9d-12bb4c9911a4", 00:16:27.622 "is_configured": true, 00:16:27.622 "data_offset": 2048, 00:16:27.622 "data_size": 63488 00:16:27.622 }, 00:16:27.622 { 00:16:27.622 "name": "BaseBdev4", 00:16:27.622 "uuid": "3afcb901-5216-561b-98b5-763901013978", 00:16:27.622 "is_configured": true, 00:16:27.622 "data_offset": 2048, 00:16:27.622 "data_size": 63488 00:16:27.622 } 00:16:27.622 ] 00:16:27.622 }' 00:16:27.622 18:57:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:27.622 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:27.622 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:27.622 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:27.622 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84775 00:16:27.622 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84775 ']' 00:16:27.622 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84775 00:16:27.622 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:27.622 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.622 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84775 00:16:27.886 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.886 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.886 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 84775' 00:16:27.886 killing process with pid 84775 00:16:27.886 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84775 00:16:27.886 Received shutdown signal, test time was about 60.000000 seconds 00:16:27.886 00:16:27.886 Latency(us) 00:16:27.886 [2024-11-16T18:57:11.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.886 [2024-11-16T18:57:11.359Z] =================================================================================================================== 00:16:27.887 [2024-11-16T18:57:11.359Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:27.887 [2024-11-16 18:57:11.110931] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:27.887 [2024-11-16 18:57:11.111048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.887 [2024-11-16 18:57:11.111122] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.887 [2024-11-16 18:57:11.111139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:27.887 18:57:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84775 00:16:28.159 [2024-11-16 18:57:11.558218] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:29.112 18:57:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:29.112 00:16:29.112 real 0m26.406s 00:16:29.112 user 0m33.134s 00:16:29.112 sys 0m2.810s 00:16:29.112 18:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.112 ************************************ 00:16:29.112 END TEST raid5f_rebuild_test_sb 00:16:29.112 ************************************ 00:16:29.112 18:57:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.372 18:57:12 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:16:29.372 18:57:12 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:29.372 18:57:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:29.372 18:57:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.372 18:57:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:29.372 ************************************ 00:16:29.372 START TEST raid_state_function_test_sb_4k 00:16:29.372 ************************************ 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:29.372 18:57:12 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85575 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:29.372 Process raid pid: 85575 00:16:29.372 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85575' 00:16:29.373 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85575 00:16:29.373 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85575 ']' 00:16:29.373 18:57:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.373 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.373 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.373 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.373 18:57:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:29.373 [2024-11-16 18:57:12.745209] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:29.373 [2024-11-16 18:57:12.745322] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.633 [2024-11-16 18:57:12.920746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:29.633 [2024-11-16 18:57:13.021908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.893 [2024-11-16 18:57:13.212708] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:29.893 [2024-11-16 18:57:13.212748] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.153 [2024-11-16 18:57:13.559828] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.153 [2024-11-16 18:57:13.559875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.153 [2024-11-16 18:57:13.559901] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.153 [2024-11-16 18:57:13.559909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.153 
18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.153 "name": "Existed_Raid", 00:16:30.153 "uuid": "b8644543-6036-4dce-ba24-3a3d72797f8c", 00:16:30.153 "strip_size_kb": 0, 00:16:30.153 "state": "configuring", 00:16:30.153 "raid_level": "raid1", 00:16:30.153 "superblock": true, 00:16:30.153 "num_base_bdevs": 2, 00:16:30.153 "num_base_bdevs_discovered": 0, 00:16:30.153 "num_base_bdevs_operational": 2, 00:16:30.153 "base_bdevs_list": [ 00:16:30.153 { 00:16:30.153 "name": "BaseBdev1", 00:16:30.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.153 "is_configured": false, 00:16:30.153 "data_offset": 0, 00:16:30.153 "data_size": 0 00:16:30.153 }, 00:16:30.153 { 00:16:30.153 "name": "BaseBdev2", 00:16:30.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.153 "is_configured": false, 00:16:30.153 "data_offset": 0, 00:16:30.153 "data_size": 0 00:16:30.153 } 00:16:30.153 ] 00:16:30.153 }' 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.153 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.723 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:16:30.723 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.723 18:57:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.723 [2024-11-16 18:57:13.999007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:30.723 [2024-11-16 18:57:13.999043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:30.723 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.723 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:30.723 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.723 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.723 [2024-11-16 18:57:14.010987] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:30.723 [2024-11-16 18:57:14.011021] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:30.723 [2024-11-16 18:57:14.011029] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:30.723 [2024-11-16 18:57:14.011040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:30.723 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.723 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:16:30.723 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.723 18:57:14 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.724 [2024-11-16 18:57:14.058026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:30.724 BaseBdev1 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.724 [ 00:16:30.724 { 00:16:30.724 "name": "BaseBdev1", 00:16:30.724 "aliases": [ 00:16:30.724 
"b504d34c-7e28-46e6-9127-88c3b6d17462" 00:16:30.724 ], 00:16:30.724 "product_name": "Malloc disk", 00:16:30.724 "block_size": 4096, 00:16:30.724 "num_blocks": 8192, 00:16:30.724 "uuid": "b504d34c-7e28-46e6-9127-88c3b6d17462", 00:16:30.724 "assigned_rate_limits": { 00:16:30.724 "rw_ios_per_sec": 0, 00:16:30.724 "rw_mbytes_per_sec": 0, 00:16:30.724 "r_mbytes_per_sec": 0, 00:16:30.724 "w_mbytes_per_sec": 0 00:16:30.724 }, 00:16:30.724 "claimed": true, 00:16:30.724 "claim_type": "exclusive_write", 00:16:30.724 "zoned": false, 00:16:30.724 "supported_io_types": { 00:16:30.724 "read": true, 00:16:30.724 "write": true, 00:16:30.724 "unmap": true, 00:16:30.724 "flush": true, 00:16:30.724 "reset": true, 00:16:30.724 "nvme_admin": false, 00:16:30.724 "nvme_io": false, 00:16:30.724 "nvme_io_md": false, 00:16:30.724 "write_zeroes": true, 00:16:30.724 "zcopy": true, 00:16:30.724 "get_zone_info": false, 00:16:30.724 "zone_management": false, 00:16:30.724 "zone_append": false, 00:16:30.724 "compare": false, 00:16:30.724 "compare_and_write": false, 00:16:30.724 "abort": true, 00:16:30.724 "seek_hole": false, 00:16:30.724 "seek_data": false, 00:16:30.724 "copy": true, 00:16:30.724 "nvme_iov_md": false 00:16:30.724 }, 00:16:30.724 "memory_domains": [ 00:16:30.724 { 00:16:30.724 "dma_device_id": "system", 00:16:30.724 "dma_device_type": 1 00:16:30.724 }, 00:16:30.724 { 00:16:30.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:30.724 "dma_device_type": 2 00:16:30.724 } 00:16:30.724 ], 00:16:30.724 "driver_specific": {} 00:16:30.724 } 00:16:30.724 ] 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.724 "name": "Existed_Raid", 00:16:30.724 "uuid": "78f1ee7b-1a46-45f4-be37-7f0edc7c87f9", 00:16:30.724 "strip_size_kb": 0, 00:16:30.724 "state": "configuring", 00:16:30.724 "raid_level": "raid1", 00:16:30.724 "superblock": true, 00:16:30.724 "num_base_bdevs": 2, 00:16:30.724 
"num_base_bdevs_discovered": 1, 00:16:30.724 "num_base_bdevs_operational": 2, 00:16:30.724 "base_bdevs_list": [ 00:16:30.724 { 00:16:30.724 "name": "BaseBdev1", 00:16:30.724 "uuid": "b504d34c-7e28-46e6-9127-88c3b6d17462", 00:16:30.724 "is_configured": true, 00:16:30.724 "data_offset": 256, 00:16:30.724 "data_size": 7936 00:16:30.724 }, 00:16:30.724 { 00:16:30.724 "name": "BaseBdev2", 00:16:30.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.724 "is_configured": false, 00:16:30.724 "data_offset": 0, 00:16:30.724 "data_size": 0 00:16:30.724 } 00:16:30.724 ] 00:16:30.724 }' 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.724 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 [2024-11-16 18:57:14.581222] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:31.294 [2024-11-16 18:57:14.581290] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 [2024-11-16 18:57:14.589256] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:31.294 [2024-11-16 18:57:14.591087] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:31.294 [2024-11-16 18:57:14.591125] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.294 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.294 "name": "Existed_Raid", 00:16:31.294 "uuid": "ee7fd990-6bd8-42d8-bcfa-b7da02bda223", 00:16:31.294 "strip_size_kb": 0, 00:16:31.294 "state": "configuring", 00:16:31.294 "raid_level": "raid1", 00:16:31.294 "superblock": true, 00:16:31.294 "num_base_bdevs": 2, 00:16:31.295 "num_base_bdevs_discovered": 1, 00:16:31.295 "num_base_bdevs_operational": 2, 00:16:31.295 "base_bdevs_list": [ 00:16:31.295 { 00:16:31.295 "name": "BaseBdev1", 00:16:31.295 "uuid": "b504d34c-7e28-46e6-9127-88c3b6d17462", 00:16:31.295 "is_configured": true, 00:16:31.295 "data_offset": 256, 00:16:31.295 "data_size": 7936 00:16:31.295 }, 00:16:31.295 { 00:16:31.295 "name": "BaseBdev2", 00:16:31.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.295 "is_configured": false, 00:16:31.295 "data_offset": 0, 00:16:31.295 "data_size": 0 00:16:31.295 } 00:16:31.295 ] 00:16:31.295 }' 00:16:31.295 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.295 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.554 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:16:31.554 18:57:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.554 18:57:14 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.814 [2024-11-16 18:57:15.026214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:31.814 [2024-11-16 18:57:15.026465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:31.814 [2024-11-16 18:57:15.026479] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:31.814 [2024-11-16 18:57:15.026769] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:31.814 [2024-11-16 18:57:15.026932] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:31.814 [2024-11-16 18:57:15.026952] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:31.814 BaseBdev2 00:16:31.814 [2024-11-16 18:57:15.027085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:31.815 18:57:15 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.815 [ 00:16:31.815 { 00:16:31.815 "name": "BaseBdev2", 00:16:31.815 "aliases": [ 00:16:31.815 "111fb27b-c85d-479f-857b-be06d2bcd3e1" 00:16:31.815 ], 00:16:31.815 "product_name": "Malloc disk", 00:16:31.815 "block_size": 4096, 00:16:31.815 "num_blocks": 8192, 00:16:31.815 "uuid": "111fb27b-c85d-479f-857b-be06d2bcd3e1", 00:16:31.815 "assigned_rate_limits": { 00:16:31.815 "rw_ios_per_sec": 0, 00:16:31.815 "rw_mbytes_per_sec": 0, 00:16:31.815 "r_mbytes_per_sec": 0, 00:16:31.815 "w_mbytes_per_sec": 0 00:16:31.815 }, 00:16:31.815 "claimed": true, 00:16:31.815 "claim_type": "exclusive_write", 00:16:31.815 "zoned": false, 00:16:31.815 "supported_io_types": { 00:16:31.815 "read": true, 00:16:31.815 "write": true, 00:16:31.815 "unmap": true, 00:16:31.815 "flush": true, 00:16:31.815 "reset": true, 00:16:31.815 "nvme_admin": false, 00:16:31.815 "nvme_io": false, 00:16:31.815 "nvme_io_md": false, 00:16:31.815 "write_zeroes": true, 00:16:31.815 "zcopy": true, 00:16:31.815 "get_zone_info": false, 00:16:31.815 "zone_management": false, 00:16:31.815 "zone_append": false, 00:16:31.815 "compare": false, 00:16:31.815 "compare_and_write": false, 00:16:31.815 "abort": true, 00:16:31.815 "seek_hole": false, 00:16:31.815 "seek_data": false, 00:16:31.815 "copy": true, 00:16:31.815 "nvme_iov_md": false 
00:16:31.815 }, 00:16:31.815 "memory_domains": [ 00:16:31.815 { 00:16:31.815 "dma_device_id": "system", 00:16:31.815 "dma_device_type": 1 00:16:31.815 }, 00:16:31.815 { 00:16:31.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:31.815 "dma_device_type": 2 00:16:31.815 } 00:16:31.815 ], 00:16:31.815 "driver_specific": {} 00:16:31.815 } 00:16:31.815 ] 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:31.815 "name": "Existed_Raid", 00:16:31.815 "uuid": "ee7fd990-6bd8-42d8-bcfa-b7da02bda223", 00:16:31.815 "strip_size_kb": 0, 00:16:31.815 "state": "online", 00:16:31.815 "raid_level": "raid1", 00:16:31.815 "superblock": true, 00:16:31.815 "num_base_bdevs": 2, 00:16:31.815 "num_base_bdevs_discovered": 2, 00:16:31.815 "num_base_bdevs_operational": 2, 00:16:31.815 "base_bdevs_list": [ 00:16:31.815 { 00:16:31.815 "name": "BaseBdev1", 00:16:31.815 "uuid": "b504d34c-7e28-46e6-9127-88c3b6d17462", 00:16:31.815 "is_configured": true, 00:16:31.815 "data_offset": 256, 00:16:31.815 "data_size": 7936 00:16:31.815 }, 00:16:31.815 { 00:16:31.815 "name": "BaseBdev2", 00:16:31.815 "uuid": "111fb27b-c85d-479f-857b-be06d2bcd3e1", 00:16:31.815 "is_configured": true, 00:16:31.815 "data_offset": 256, 00:16:31.815 "data_size": 7936 00:16:31.815 } 00:16:31.815 ] 00:16:31.815 }' 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:31.815 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:32.075 18:57:15 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.075 [2024-11-16 18:57:15.437779] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.075 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:32.075 "name": "Existed_Raid", 00:16:32.075 "aliases": [ 00:16:32.075 "ee7fd990-6bd8-42d8-bcfa-b7da02bda223" 00:16:32.075 ], 00:16:32.075 "product_name": "Raid Volume", 00:16:32.075 "block_size": 4096, 00:16:32.075 "num_blocks": 7936, 00:16:32.075 "uuid": "ee7fd990-6bd8-42d8-bcfa-b7da02bda223", 00:16:32.075 "assigned_rate_limits": { 00:16:32.075 "rw_ios_per_sec": 0, 00:16:32.075 "rw_mbytes_per_sec": 0, 00:16:32.075 "r_mbytes_per_sec": 0, 00:16:32.075 "w_mbytes_per_sec": 0 00:16:32.075 }, 00:16:32.075 "claimed": false, 00:16:32.075 "zoned": false, 00:16:32.075 "supported_io_types": { 00:16:32.075 "read": true, 
00:16:32.075 "write": true, 00:16:32.075 "unmap": false, 00:16:32.075 "flush": false, 00:16:32.075 "reset": true, 00:16:32.075 "nvme_admin": false, 00:16:32.075 "nvme_io": false, 00:16:32.075 "nvme_io_md": false, 00:16:32.075 "write_zeroes": true, 00:16:32.075 "zcopy": false, 00:16:32.075 "get_zone_info": false, 00:16:32.075 "zone_management": false, 00:16:32.075 "zone_append": false, 00:16:32.075 "compare": false, 00:16:32.075 "compare_and_write": false, 00:16:32.075 "abort": false, 00:16:32.075 "seek_hole": false, 00:16:32.075 "seek_data": false, 00:16:32.075 "copy": false, 00:16:32.075 "nvme_iov_md": false 00:16:32.075 }, 00:16:32.075 "memory_domains": [ 00:16:32.075 { 00:16:32.075 "dma_device_id": "system", 00:16:32.075 "dma_device_type": 1 00:16:32.075 }, 00:16:32.075 { 00:16:32.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.075 "dma_device_type": 2 00:16:32.075 }, 00:16:32.075 { 00:16:32.075 "dma_device_id": "system", 00:16:32.075 "dma_device_type": 1 00:16:32.075 }, 00:16:32.075 { 00:16:32.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:32.076 "dma_device_type": 2 00:16:32.076 } 00:16:32.076 ], 00:16:32.076 "driver_specific": { 00:16:32.076 "raid": { 00:16:32.076 "uuid": "ee7fd990-6bd8-42d8-bcfa-b7da02bda223", 00:16:32.076 "strip_size_kb": 0, 00:16:32.076 "state": "online", 00:16:32.076 "raid_level": "raid1", 00:16:32.076 "superblock": true, 00:16:32.076 "num_base_bdevs": 2, 00:16:32.076 "num_base_bdevs_discovered": 2, 00:16:32.076 "num_base_bdevs_operational": 2, 00:16:32.076 "base_bdevs_list": [ 00:16:32.076 { 00:16:32.076 "name": "BaseBdev1", 00:16:32.076 "uuid": "b504d34c-7e28-46e6-9127-88c3b6d17462", 00:16:32.076 "is_configured": true, 00:16:32.076 "data_offset": 256, 00:16:32.076 "data_size": 7936 00:16:32.076 }, 00:16:32.076 { 00:16:32.076 "name": "BaseBdev2", 00:16:32.076 "uuid": "111fb27b-c85d-479f-857b-be06d2bcd3e1", 00:16:32.076 "is_configured": true, 00:16:32.076 "data_offset": 256, 00:16:32.076 "data_size": 7936 00:16:32.076 } 
00:16:32.076 ] 00:16:32.076 } 00:16:32.076 } 00:16:32.076 }' 00:16:32.076 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:32.076 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:32.076 BaseBdev2' 00:16:32.076 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.335 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.336 [2024-11-16 18:57:15.637223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:32.336 18:57:15 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.336 "name": "Existed_Raid", 00:16:32.336 "uuid": "ee7fd990-6bd8-42d8-bcfa-b7da02bda223", 00:16:32.336 "strip_size_kb": 0, 00:16:32.336 "state": "online", 00:16:32.336 "raid_level": "raid1", 00:16:32.336 "superblock": true, 00:16:32.336 
"num_base_bdevs": 2, 00:16:32.336 "num_base_bdevs_discovered": 1, 00:16:32.336 "num_base_bdevs_operational": 1, 00:16:32.336 "base_bdevs_list": [ 00:16:32.336 { 00:16:32.336 "name": null, 00:16:32.336 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.336 "is_configured": false, 00:16:32.336 "data_offset": 0, 00:16:32.336 "data_size": 7936 00:16:32.336 }, 00:16:32.336 { 00:16:32.336 "name": "BaseBdev2", 00:16:32.336 "uuid": "111fb27b-c85d-479f-857b-be06d2bcd3e1", 00:16:32.336 "is_configured": true, 00:16:32.336 "data_offset": 256, 00:16:32.336 "data_size": 7936 00:16:32.336 } 00:16:32.336 ] 00:16:32.336 }' 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.336 18:57:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.906 [2024-11-16 18:57:16.174214] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:32.906 [2024-11-16 18:57:16.174318] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:32.906 [2024-11-16 18:57:16.265464] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:32.906 [2024-11-16 18:57:16.265518] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:32.906 [2024-11-16 18:57:16.265529] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:32.906 18:57:16 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85575 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85575 ']' 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85575 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85575 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.906 killing process with pid 85575 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85575' 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85575 00:16:32.906 [2024-11-16 18:57:16.352133] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:32.906 18:57:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85575 00:16:32.906 [2024-11-16 18:57:16.368090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:34.288 18:57:17 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:16:34.288 00:16:34.288 real 0m4.744s 00:16:34.288 user 0m6.856s 00:16:34.288 sys 0m0.794s 00:16:34.288 18:57:17 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:34.288 18:57:17 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.288 ************************************ 00:16:34.288 END TEST raid_state_function_test_sb_4k 00:16:34.288 ************************************ 00:16:34.288 18:57:17 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:34.288 18:57:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:34.288 18:57:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:34.288 18:57:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:34.288 ************************************ 00:16:34.288 START TEST raid_superblock_test_4k 00:16:34.288 ************************************ 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:34.288 
18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85827 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85827 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85827 ']' 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.288 18:57:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:34.288 [2024-11-16 18:57:17.550028] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:16:34.288 [2024-11-16 18:57:17.550150] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85827 ] 00:16:34.288 [2024-11-16 18:57:17.711079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.548 [2024-11-16 18:57:17.819691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.548 [2024-11-16 18:57:18.014870] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.548 [2024-11-16 18:57:18.014935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.120 malloc1 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.120 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.120 [2024-11-16 18:57:18.413416] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:35.120 [2024-11-16 18:57:18.413475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.120 [2024-11-16 18:57:18.413497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:35.120 [2024-11-16 18:57:18.413506] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.120 [2024-11-16 18:57:18.415584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.121 [2024-11-16 18:57:18.415623] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:35.121 pt1 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.121 malloc2 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.121 [2024-11-16 18:57:18.466479] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.121 [2024-11-16 18:57:18.466542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.121 [2024-11-16 18:57:18.466561] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:35.121 [2024-11-16 18:57:18.466569] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.121 [2024-11-16 18:57:18.468545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.121 [2024-11-16 
18:57:18.468578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.121 pt2 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.121 [2024-11-16 18:57:18.478518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:35.121 [2024-11-16 18:57:18.480295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.121 [2024-11-16 18:57:18.480475] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:35.121 [2024-11-16 18:57:18.480493] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:35.121 [2024-11-16 18:57:18.480725] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:16:35.121 [2024-11-16 18:57:18.480886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:35.121 [2024-11-16 18:57:18.480908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:35.121 [2024-11-16 18:57:18.481038] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.121 "name": "raid_bdev1", 00:16:35.121 "uuid": "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:35.121 "strip_size_kb": 0, 00:16:35.121 "state": "online", 00:16:35.121 "raid_level": "raid1", 00:16:35.121 "superblock": true, 00:16:35.121 "num_base_bdevs": 2, 00:16:35.121 
"num_base_bdevs_discovered": 2, 00:16:35.121 "num_base_bdevs_operational": 2, 00:16:35.121 "base_bdevs_list": [ 00:16:35.121 { 00:16:35.121 "name": "pt1", 00:16:35.121 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.121 "is_configured": true, 00:16:35.121 "data_offset": 256, 00:16:35.121 "data_size": 7936 00:16:35.121 }, 00:16:35.121 { 00:16:35.121 "name": "pt2", 00:16:35.121 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.121 "is_configured": true, 00:16:35.121 "data_offset": 256, 00:16:35.121 "data_size": 7936 00:16:35.121 } 00:16:35.121 ] 00:16:35.121 }' 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.121 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:35.690 [2024-11-16 18:57:18.866040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:35.690 "name": "raid_bdev1", 00:16:35.690 "aliases": [ 00:16:35.690 "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a" 00:16:35.690 ], 00:16:35.690 "product_name": "Raid Volume", 00:16:35.690 "block_size": 4096, 00:16:35.690 "num_blocks": 7936, 00:16:35.690 "uuid": "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:35.690 "assigned_rate_limits": { 00:16:35.690 "rw_ios_per_sec": 0, 00:16:35.690 "rw_mbytes_per_sec": 0, 00:16:35.690 "r_mbytes_per_sec": 0, 00:16:35.690 "w_mbytes_per_sec": 0 00:16:35.690 }, 00:16:35.690 "claimed": false, 00:16:35.690 "zoned": false, 00:16:35.690 "supported_io_types": { 00:16:35.690 "read": true, 00:16:35.690 "write": true, 00:16:35.690 "unmap": false, 00:16:35.690 "flush": false, 00:16:35.690 "reset": true, 00:16:35.690 "nvme_admin": false, 00:16:35.690 "nvme_io": false, 00:16:35.690 "nvme_io_md": false, 00:16:35.690 "write_zeroes": true, 00:16:35.690 "zcopy": false, 00:16:35.690 "get_zone_info": false, 00:16:35.690 "zone_management": false, 00:16:35.690 "zone_append": false, 00:16:35.690 "compare": false, 00:16:35.690 "compare_and_write": false, 00:16:35.690 "abort": false, 00:16:35.690 "seek_hole": false, 00:16:35.690 "seek_data": false, 00:16:35.690 "copy": false, 00:16:35.690 "nvme_iov_md": false 00:16:35.690 }, 00:16:35.690 "memory_domains": [ 00:16:35.690 { 00:16:35.690 "dma_device_id": "system", 00:16:35.690 "dma_device_type": 1 00:16:35.690 }, 00:16:35.690 { 00:16:35.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.690 "dma_device_type": 2 00:16:35.690 }, 00:16:35.690 { 00:16:35.690 "dma_device_id": "system", 00:16:35.690 "dma_device_type": 1 00:16:35.690 }, 00:16:35.690 { 00:16:35.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:35.690 "dma_device_type": 2 00:16:35.690 } 00:16:35.690 ], 
00:16:35.690 "driver_specific": { 00:16:35.690 "raid": { 00:16:35.690 "uuid": "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:35.690 "strip_size_kb": 0, 00:16:35.690 "state": "online", 00:16:35.690 "raid_level": "raid1", 00:16:35.690 "superblock": true, 00:16:35.690 "num_base_bdevs": 2, 00:16:35.690 "num_base_bdevs_discovered": 2, 00:16:35.690 "num_base_bdevs_operational": 2, 00:16:35.690 "base_bdevs_list": [ 00:16:35.690 { 00:16:35.690 "name": "pt1", 00:16:35.690 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.690 "is_configured": true, 00:16:35.690 "data_offset": 256, 00:16:35.690 "data_size": 7936 00:16:35.690 }, 00:16:35.690 { 00:16:35.690 "name": "pt2", 00:16:35.690 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.690 "is_configured": true, 00:16:35.690 "data_offset": 256, 00:16:35.690 "data_size": 7936 00:16:35.690 } 00:16:35.690 ] 00:16:35.690 } 00:16:35.690 } 00:16:35.690 }' 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:35.690 pt2' 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.690 18:57:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.690 18:57:18 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.690 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.690 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 [2024-11-16 18:57:19.093619] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e34a74c8-024d-4a6f-baa0-831c1d8e5a6a 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z e34a74c8-024d-4a6f-baa0-831c1d8e5a6a ']' 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.691 [2024-11-16 18:57:19.141324] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.691 [2024-11-16 18:57:19.141347] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:35.691 [2024-11-16 18:57:19.141420] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:35.691 [2024-11-16 18:57:19.141472] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:35.691 [2024-11-16 18:57:19.141483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:35.691 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.950 [2024-11-16 18:57:19.265113] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:35.950 [2024-11-16 18:57:19.266848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:35.950 [2024-11-16 18:57:19.266910] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:35.950 [2024-11-16 18:57:19.266958] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:35.950 [2024-11-16 18:57:19.266972] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:35.950 [2024-11-16 18:57:19.266981] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:35.950 request: 00:16:35.950 { 00:16:35.950 "name": "raid_bdev1", 00:16:35.950 "raid_level": "raid1", 00:16:35.950 "base_bdevs": [ 00:16:35.950 "malloc1", 00:16:35.950 "malloc2" 00:16:35.950 ], 00:16:35.950 "superblock": false, 00:16:35.950 "method": "bdev_raid_create", 00:16:35.950 "req_id": 1 00:16:35.950 } 00:16:35.950 Got JSON-RPC error response 00:16:35.950 response: 00:16:35.950 { 00:16:35.950 "code": -17, 00:16:35.950 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:35.950 } 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.950 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.950 [2024-11-16 18:57:19.328989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:35.950 [2024-11-16 18:57:19.329077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.951 [2024-11-16 18:57:19.329109] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:35.951 [2024-11-16 18:57:19.329169] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.951 [2024-11-16 18:57:19.331183] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.951 [2024-11-16 18:57:19.331253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:35.951 [2024-11-16 18:57:19.331337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:35.951 [2024-11-16 18:57:19.331411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:35.951 pt1 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:35.951 "name": "raid_bdev1", 00:16:35.951 "uuid": "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:35.951 "strip_size_kb": 0, 00:16:35.951 "state": "configuring", 00:16:35.951 "raid_level": "raid1", 00:16:35.951 "superblock": true, 00:16:35.951 "num_base_bdevs": 2, 00:16:35.951 "num_base_bdevs_discovered": 1, 00:16:35.951 "num_base_bdevs_operational": 2, 00:16:35.951 "base_bdevs_list": [ 00:16:35.951 { 00:16:35.951 "name": "pt1", 00:16:35.951 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.951 "is_configured": true, 00:16:35.951 "data_offset": 256, 00:16:35.951 "data_size": 7936 00:16:35.951 }, 00:16:35.951 { 00:16:35.951 "name": null, 00:16:35.951 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.951 "is_configured": false, 00:16:35.951 "data_offset": 256, 00:16:35.951 "data_size": 7936 00:16:35.951 } 
00:16:35.951 ] 00:16:35.951 }' 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:35.951 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.210 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:16:36.210 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:36.210 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:36.210 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:36.210 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.210 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.469 [2024-11-16 18:57:19.684449] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:36.469 [2024-11-16 18:57:19.684514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.469 [2024-11-16 18:57:19.684533] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:36.469 [2024-11-16 18:57:19.684544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.469 [2024-11-16 18:57:19.684970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.469 [2024-11-16 18:57:19.685001] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:36.469 [2024-11-16 18:57:19.685075] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:36.469 [2024-11-16 18:57:19.685097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:36.469 [2024-11-16 18:57:19.685209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:16:36.469 [2024-11-16 18:57:19.685220] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:36.469 [2024-11-16 18:57:19.685438] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:36.469 [2024-11-16 18:57:19.685580] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:36.469 [2024-11-16 18:57:19.685589] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:36.469 [2024-11-16 18:57:19.685743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.469 pt2 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.469 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.469 "name": "raid_bdev1", 00:16:36.469 "uuid": "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:36.470 "strip_size_kb": 0, 00:16:36.470 "state": "online", 00:16:36.470 "raid_level": "raid1", 00:16:36.470 "superblock": true, 00:16:36.470 "num_base_bdevs": 2, 00:16:36.470 "num_base_bdevs_discovered": 2, 00:16:36.470 "num_base_bdevs_operational": 2, 00:16:36.470 "base_bdevs_list": [ 00:16:36.470 { 00:16:36.470 "name": "pt1", 00:16:36.470 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.470 "is_configured": true, 00:16:36.470 "data_offset": 256, 00:16:36.470 "data_size": 7936 00:16:36.470 }, 00:16:36.470 { 00:16:36.470 "name": "pt2", 00:16:36.470 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.470 "is_configured": true, 00:16:36.470 "data_offset": 256, 00:16:36.470 "data_size": 7936 00:16:36.470 } 00:16:36.470 ] 00:16:36.470 }' 00:16:36.470 18:57:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.470 18:57:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.729 [2024-11-16 18:57:20.147861] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.729 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:36.729 "name": "raid_bdev1", 00:16:36.729 "aliases": [ 00:16:36.729 "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a" 00:16:36.729 ], 00:16:36.729 "product_name": "Raid Volume", 00:16:36.729 "block_size": 4096, 00:16:36.729 "num_blocks": 7936, 00:16:36.729 "uuid": "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:36.729 "assigned_rate_limits": { 00:16:36.729 "rw_ios_per_sec": 0, 00:16:36.729 "rw_mbytes_per_sec": 0, 00:16:36.729 "r_mbytes_per_sec": 0, 00:16:36.729 "w_mbytes_per_sec": 0 00:16:36.729 }, 00:16:36.729 "claimed": false, 00:16:36.729 "zoned": false, 00:16:36.729 "supported_io_types": { 00:16:36.729 "read": true, 00:16:36.729 "write": true, 00:16:36.729 "unmap": false, 
00:16:36.729 "flush": false, 00:16:36.729 "reset": true, 00:16:36.729 "nvme_admin": false, 00:16:36.729 "nvme_io": false, 00:16:36.729 "nvme_io_md": false, 00:16:36.729 "write_zeroes": true, 00:16:36.729 "zcopy": false, 00:16:36.729 "get_zone_info": false, 00:16:36.729 "zone_management": false, 00:16:36.729 "zone_append": false, 00:16:36.729 "compare": false, 00:16:36.729 "compare_and_write": false, 00:16:36.729 "abort": false, 00:16:36.729 "seek_hole": false, 00:16:36.729 "seek_data": false, 00:16:36.729 "copy": false, 00:16:36.729 "nvme_iov_md": false 00:16:36.729 }, 00:16:36.729 "memory_domains": [ 00:16:36.729 { 00:16:36.730 "dma_device_id": "system", 00:16:36.730 "dma_device_type": 1 00:16:36.730 }, 00:16:36.730 { 00:16:36.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.730 "dma_device_type": 2 00:16:36.730 }, 00:16:36.730 { 00:16:36.730 "dma_device_id": "system", 00:16:36.730 "dma_device_type": 1 00:16:36.730 }, 00:16:36.730 { 00:16:36.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.730 "dma_device_type": 2 00:16:36.730 } 00:16:36.730 ], 00:16:36.730 "driver_specific": { 00:16:36.730 "raid": { 00:16:36.730 "uuid": "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:36.730 "strip_size_kb": 0, 00:16:36.730 "state": "online", 00:16:36.730 "raid_level": "raid1", 00:16:36.730 "superblock": true, 00:16:36.730 "num_base_bdevs": 2, 00:16:36.730 "num_base_bdevs_discovered": 2, 00:16:36.730 "num_base_bdevs_operational": 2, 00:16:36.730 "base_bdevs_list": [ 00:16:36.730 { 00:16:36.730 "name": "pt1", 00:16:36.730 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.730 "is_configured": true, 00:16:36.730 "data_offset": 256, 00:16:36.730 "data_size": 7936 00:16:36.730 }, 00:16:36.730 { 00:16:36.730 "name": "pt2", 00:16:36.730 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.730 "is_configured": true, 00:16:36.730 "data_offset": 256, 00:16:36.730 "data_size": 7936 00:16:36.730 } 00:16:36.730 ] 00:16:36.730 } 00:16:36.730 } 00:16:36.730 }' 00:16:36.730 
18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:36.989 pt2' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:36.989 [2024-11-16 18:57:20.355468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' e34a74c8-024d-4a6f-baa0-831c1d8e5a6a '!=' e34a74c8-024d-4a6f-baa0-831c1d8e5a6a ']' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.989 [2024-11-16 18:57:20.403202] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.989 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.989 "name": "raid_bdev1", 00:16:36.989 "uuid": 
"e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:36.989 "strip_size_kb": 0, 00:16:36.989 "state": "online", 00:16:36.989 "raid_level": "raid1", 00:16:36.989 "superblock": true, 00:16:36.989 "num_base_bdevs": 2, 00:16:36.989 "num_base_bdevs_discovered": 1, 00:16:36.989 "num_base_bdevs_operational": 1, 00:16:36.989 "base_bdevs_list": [ 00:16:36.989 { 00:16:36.990 "name": null, 00:16:36.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.990 "is_configured": false, 00:16:36.990 "data_offset": 0, 00:16:36.990 "data_size": 7936 00:16:36.990 }, 00:16:36.990 { 00:16:36.990 "name": "pt2", 00:16:36.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.990 "is_configured": true, 00:16:36.990 "data_offset": 256, 00:16:36.990 "data_size": 7936 00:16:36.990 } 00:16:36.990 ] 00:16:36.990 }' 00:16:36.990 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.990 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.560 [2024-11-16 18:57:20.838444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.560 [2024-11-16 18:57:20.838512] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.560 [2024-11-16 18:57:20.838615] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.560 [2024-11-16 18:57:20.838685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.560 [2024-11-16 18:57:20.838768] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.560 [2024-11-16 18:57:20.914303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:37.560 [2024-11-16 18:57:20.914361] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.560 [2024-11-16 18:57:20.914378] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:16:37.560 [2024-11-16 18:57:20.914388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.560 [2024-11-16 18:57:20.916517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.560 [2024-11-16 18:57:20.916558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:37.560 [2024-11-16 18:57:20.916632] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:37.560 [2024-11-16 18:57:20.916696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:37.560 [2024-11-16 18:57:20.916794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:37.560 [2024-11-16 18:57:20.916806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:37.560 [2024-11-16 18:57:20.917039] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:37.560 [2024-11-16 18:57:20.917200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:37.560 [2024-11-16 18:57:20.917210] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:16:37.560 [2024-11-16 18:57:20.917348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.560 pt2 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.560 18:57:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:37.560 "name": "raid_bdev1", 00:16:37.560 "uuid": "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:37.560 "strip_size_kb": 0, 00:16:37.560 "state": "online", 00:16:37.560 "raid_level": "raid1", 00:16:37.560 "superblock": true, 00:16:37.560 "num_base_bdevs": 2, 00:16:37.560 "num_base_bdevs_discovered": 1, 00:16:37.560 "num_base_bdevs_operational": 1, 00:16:37.560 "base_bdevs_list": [ 00:16:37.560 { 00:16:37.560 "name": null, 00:16:37.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:37.560 "is_configured": false, 00:16:37.560 "data_offset": 256, 00:16:37.560 "data_size": 7936 00:16:37.560 }, 00:16:37.560 { 00:16:37.560 "name": "pt2", 00:16:37.560 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:37.560 "is_configured": true, 00:16:37.560 "data_offset": 256, 00:16:37.560 "data_size": 7936 00:16:37.560 } 00:16:37.560 ] 00:16:37.560 }' 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:37.560 18:57:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.131 [2024-11-16 18:57:21.317566] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.131 [2024-11-16 18:57:21.317632] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.131 [2024-11-16 18:57:21.317732] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.131 [2024-11-16 18:57:21.317794] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:16:38.131 [2024-11-16 18:57:21.317848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.131 [2024-11-16 18:57:21.361508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:38.131 [2024-11-16 18:57:21.361592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.131 [2024-11-16 18:57:21.361624] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:16:38.131 [2024-11-16 18:57:21.361666] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.131 [2024-11-16 18:57:21.363737] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.131 [2024-11-16 18:57:21.363824] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:38.131 [2024-11-16 18:57:21.363917] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:38.131 [2024-11-16 18:57:21.363976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:38.131 [2024-11-16 18:57:21.364136] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:38.131 [2024-11-16 18:57:21.364186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.131 [2024-11-16 18:57:21.364224] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:38.131 [2024-11-16 18:57:21.364318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:38.131 [2024-11-16 18:57:21.364419] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:38.131 [2024-11-16 18:57:21.364454] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:38.131 [2024-11-16 18:57:21.364706] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:38.131 [2024-11-16 18:57:21.364879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:38.131 [2024-11-16 18:57:21.364921] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:38.131 [2024-11-16 18:57:21.365073] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.131 pt1 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.131 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.131 "name": "raid_bdev1", 00:16:38.131 "uuid": "e34a74c8-024d-4a6f-baa0-831c1d8e5a6a", 00:16:38.132 "strip_size_kb": 0, 00:16:38.132 "state": "online", 00:16:38.132 
"raid_level": "raid1", 00:16:38.132 "superblock": true, 00:16:38.132 "num_base_bdevs": 2, 00:16:38.132 "num_base_bdevs_discovered": 1, 00:16:38.132 "num_base_bdevs_operational": 1, 00:16:38.132 "base_bdevs_list": [ 00:16:38.132 { 00:16:38.132 "name": null, 00:16:38.132 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.132 "is_configured": false, 00:16:38.132 "data_offset": 256, 00:16:38.132 "data_size": 7936 00:16:38.132 }, 00:16:38.132 { 00:16:38.132 "name": "pt2", 00:16:38.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.132 "is_configured": true, 00:16:38.132 "data_offset": 256, 00:16:38.132 "data_size": 7936 00:16:38.132 } 00:16:38.132 ] 00:16:38.132 }' 00:16:38.132 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.132 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.392 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:16:38.392 [2024-11-16 18:57:21.848887] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' e34a74c8-024d-4a6f-baa0-831c1d8e5a6a '!=' e34a74c8-024d-4a6f-baa0-831c1d8e5a6a ']' 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85827 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85827 ']' 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85827 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85827 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:38.652 killing process with pid 85827 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85827' 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85827 00:16:38.652 [2024-11-16 18:57:21.911166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:38.652 [2024-11-16 18:57:21.911246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.652 [2024-11-16 18:57:21.911287] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.652 [2024-11-16 
18:57:21.911301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:38.652 18:57:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85827 00:16:38.652 [2024-11-16 18:57:22.106170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.062 ************************************ 00:16:40.062 END TEST raid_superblock_test_4k 00:16:40.062 ************************************ 00:16:40.062 18:57:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:16:40.062 00:16:40.062 real 0m5.672s 00:16:40.062 user 0m8.529s 00:16:40.062 sys 0m1.059s 00:16:40.062 18:57:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:40.062 18:57:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 18:57:23 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:16:40.062 18:57:23 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:16:40.062 18:57:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:40.062 18:57:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:40.062 18:57:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 ************************************ 00:16:40.062 START TEST raid_rebuild_test_sb_4k 00:16:40.062 ************************************ 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:40.062 18:57:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86150 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86150 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86150 ']' 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:40.062 18:57:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.062 [2024-11-16 18:57:23.304366] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:40.062 [2024-11-16 18:57:23.304569] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:40.062 Zero copy mechanism will not be used. 
00:16:40.062 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86150 ] 00:16:40.062 [2024-11-16 18:57:23.475619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.322 [2024-11-16 18:57:23.576084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.322 [2024-11-16 18:57:23.763446] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.322 [2024-11-16 18:57:23.763563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.892 BaseBdev1_malloc 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.892 [2024-11-16 18:57:24.196827] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:40.892 [2024-11-16 18:57:24.196894] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.892 [2024-11-16 18:57:24.196918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:40.892 [2024-11-16 18:57:24.196930] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.892 [2024-11-16 18:57:24.198959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.892 [2024-11-16 18:57:24.198995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:40.892 BaseBdev1 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.892 BaseBdev2_malloc 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.892 [2024-11-16 18:57:24.250640] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:40.892 [2024-11-16 18:57:24.250724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.892 [2024-11-16 18:57:24.250744] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:16:40.892 [2024-11-16 18:57:24.250754] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.892 [2024-11-16 18:57:24.252727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.892 [2024-11-16 18:57:24.252765] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:40.892 BaseBdev2 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.892 spare_malloc 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.892 spare_delay 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:40.892 [2024-11-16 18:57:24.354137] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:40.892 
[2024-11-16 18:57:24.354191] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.892 [2024-11-16 18:57:24.354225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:16:40.892 [2024-11-16 18:57:24.354235] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.892 [2024-11-16 18:57:24.356473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.892 [2024-11-16 18:57:24.356514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:40.892 spare 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.892 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.152 [2024-11-16 18:57:24.366172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:41.152 [2024-11-16 18:57:24.367940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:41.152 [2024-11-16 18:57:24.368132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:41.152 [2024-11-16 18:57:24.368149] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:41.152 [2024-11-16 18:57:24.368368] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:41.152 [2024-11-16 18:57:24.368519] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:41.152 [2024-11-16 18:57:24.368527] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:16:41.153 [2024-11-16 18:57:24.368680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.153 "name": "raid_bdev1", 00:16:41.153 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:41.153 "strip_size_kb": 0, 00:16:41.153 "state": "online", 00:16:41.153 "raid_level": "raid1", 00:16:41.153 "superblock": true, 00:16:41.153 "num_base_bdevs": 2, 00:16:41.153 "num_base_bdevs_discovered": 2, 00:16:41.153 "num_base_bdevs_operational": 2, 00:16:41.153 "base_bdevs_list": [ 00:16:41.153 { 00:16:41.153 "name": "BaseBdev1", 00:16:41.153 "uuid": "c72b8bb4-3f3e-5b63-8fe9-8eb15d5826f0", 00:16:41.153 "is_configured": true, 00:16:41.153 "data_offset": 256, 00:16:41.153 "data_size": 7936 00:16:41.153 }, 00:16:41.153 { 00:16:41.153 "name": "BaseBdev2", 00:16:41.153 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:41.153 "is_configured": true, 00:16:41.153 "data_offset": 256, 00:16:41.153 "data_size": 7936 00:16:41.153 } 00:16:41.153 ] 00:16:41.153 }' 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.153 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.413 [2024-11-16 18:57:24.785676] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:41.413 18:57:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:41.674 [2024-11-16 18:57:25.041007] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:41.674 /dev/nbd0 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:41.674 1+0 records in 00:16:41.674 1+0 records out 00:16:41.674 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348383 s, 11.8 MB/s 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:16:41.674 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:16:42.244 7936+0 records in 00:16:42.244 7936+0 records out 00:16:42.244 32505856 bytes (33 MB, 31 MiB) copied, 0.545742 s, 59.6 MB/s 00:16:42.244 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:42.244 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.244 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:42.244 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:42.244 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:42.244 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:42.244 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:42.505 [2024-11-16 18:57:25.861092] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.505 [2024-11-16 18:57:25.889106] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.505 "name": "raid_bdev1", 00:16:42.505 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:42.505 "strip_size_kb": 0, 00:16:42.505 "state": "online", 00:16:42.505 "raid_level": "raid1", 00:16:42.505 "superblock": true, 00:16:42.505 "num_base_bdevs": 2, 00:16:42.505 "num_base_bdevs_discovered": 1, 00:16:42.505 "num_base_bdevs_operational": 1, 00:16:42.505 "base_bdevs_list": [ 00:16:42.505 { 00:16:42.505 "name": null, 00:16:42.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.505 "is_configured": false, 00:16:42.505 "data_offset": 0, 00:16:42.505 "data_size": 7936 00:16:42.505 }, 00:16:42.505 { 00:16:42.505 "name": "BaseBdev2", 00:16:42.505 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:42.505 "is_configured": true, 00:16:42.505 "data_offset": 256, 00:16:42.505 "data_size": 7936 00:16:42.505 } 00:16:42.505 ] 00:16:42.505 }' 00:16:42.505 18:57:25 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.505 18:57:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.075 18:57:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:43.075 18:57:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.075 18:57:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:43.075 [2024-11-16 18:57:26.304407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:43.075 [2024-11-16 18:57:26.320755] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:16:43.075 18:57:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.075 18:57:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:43.075 [2024-11-16 18:57:26.322519] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.015 "name": "raid_bdev1", 00:16:44.015 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:44.015 "strip_size_kb": 0, 00:16:44.015 "state": "online", 00:16:44.015 "raid_level": "raid1", 00:16:44.015 "superblock": true, 00:16:44.015 "num_base_bdevs": 2, 00:16:44.015 "num_base_bdevs_discovered": 2, 00:16:44.015 "num_base_bdevs_operational": 2, 00:16:44.015 "process": { 00:16:44.015 "type": "rebuild", 00:16:44.015 "target": "spare", 00:16:44.015 "progress": { 00:16:44.015 "blocks": 2560, 00:16:44.015 "percent": 32 00:16:44.015 } 00:16:44.015 }, 00:16:44.015 "base_bdevs_list": [ 00:16:44.015 { 00:16:44.015 "name": "spare", 00:16:44.015 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:44.015 "is_configured": true, 00:16:44.015 "data_offset": 256, 00:16:44.015 "data_size": 7936 00:16:44.015 }, 00:16:44.015 { 00:16:44.015 "name": "BaseBdev2", 00:16:44.015 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:44.015 "is_configured": true, 00:16:44.015 "data_offset": 256, 00:16:44.015 "data_size": 7936 00:16:44.015 } 00:16:44.015 ] 00:16:44.015 }' 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd 
bdev_raid_remove_base_bdev spare 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.015 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.015 [2024-11-16 18:57:27.466127] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.275 [2024-11-16 18:57:27.527064] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:44.275 [2024-11-16 18:57:27.527122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.275 [2024-11-16 18:57:27.527137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:44.275 [2024-11-16 18:57:27.527149] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.275 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.276 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.276 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.276 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.276 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.276 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.276 "name": "raid_bdev1", 00:16:44.276 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:44.276 "strip_size_kb": 0, 00:16:44.276 "state": "online", 00:16:44.276 "raid_level": "raid1", 00:16:44.276 "superblock": true, 00:16:44.276 "num_base_bdevs": 2, 00:16:44.276 "num_base_bdevs_discovered": 1, 00:16:44.276 "num_base_bdevs_operational": 1, 00:16:44.276 "base_bdevs_list": [ 00:16:44.276 { 00:16:44.276 "name": null, 00:16:44.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.276 "is_configured": false, 00:16:44.276 "data_offset": 0, 00:16:44.276 "data_size": 7936 00:16:44.276 }, 00:16:44.276 { 00:16:44.276 "name": "BaseBdev2", 00:16:44.276 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:44.276 "is_configured": true, 00:16:44.276 "data_offset": 256, 00:16:44.276 "data_size": 7936 00:16:44.276 } 00:16:44.276 ] 00:16:44.276 }' 00:16:44.276 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.276 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.536 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.536 
18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.536 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.536 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.536 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.536 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.536 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.536 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.536 18:57:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.536 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.797 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.797 "name": "raid_bdev1", 00:16:44.797 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:44.797 "strip_size_kb": 0, 00:16:44.797 "state": "online", 00:16:44.797 "raid_level": "raid1", 00:16:44.797 "superblock": true, 00:16:44.797 "num_base_bdevs": 2, 00:16:44.797 "num_base_bdevs_discovered": 1, 00:16:44.797 "num_base_bdevs_operational": 1, 00:16:44.797 "base_bdevs_list": [ 00:16:44.797 { 00:16:44.797 "name": null, 00:16:44.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.797 "is_configured": false, 00:16:44.797 "data_offset": 0, 00:16:44.797 "data_size": 7936 00:16:44.797 }, 00:16:44.797 { 00:16:44.797 "name": "BaseBdev2", 00:16:44.797 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:44.797 "is_configured": true, 00:16:44.797 "data_offset": 256, 00:16:44.797 "data_size": 7936 00:16:44.797 } 00:16:44.797 ] 00:16:44.797 }' 00:16:44.797 18:57:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.797 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.797 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.797 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.797 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:44.797 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.797 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:44.797 [2024-11-16 18:57:28.120905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.797 [2024-11-16 18:57:28.136612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:16:44.797 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.797 18:57:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:44.797 [2024-11-16 18:57:28.138482] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.737 "name": "raid_bdev1", 00:16:45.737 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:45.737 "strip_size_kb": 0, 00:16:45.737 "state": "online", 00:16:45.737 "raid_level": "raid1", 00:16:45.737 "superblock": true, 00:16:45.737 "num_base_bdevs": 2, 00:16:45.737 "num_base_bdevs_discovered": 2, 00:16:45.737 "num_base_bdevs_operational": 2, 00:16:45.737 "process": { 00:16:45.737 "type": "rebuild", 00:16:45.737 "target": "spare", 00:16:45.737 "progress": { 00:16:45.737 "blocks": 2560, 00:16:45.737 "percent": 32 00:16:45.737 } 00:16:45.737 }, 00:16:45.737 "base_bdevs_list": [ 00:16:45.737 { 00:16:45.737 "name": "spare", 00:16:45.737 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:45.737 "is_configured": true, 00:16:45.737 "data_offset": 256, 00:16:45.737 "data_size": 7936 00:16:45.737 }, 00:16:45.737 { 00:16:45.737 "name": "BaseBdev2", 00:16:45.737 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:45.737 "is_configured": true, 00:16:45.737 "data_offset": 256, 00:16:45.737 "data_size": 7936 00:16:45.737 } 00:16:45.737 ] 00:16:45.737 }' 00:16:45.737 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:45.996 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=651 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- 
# jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.996 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.996 "name": "raid_bdev1", 00:16:45.996 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:45.996 "strip_size_kb": 0, 00:16:45.996 "state": "online", 00:16:45.996 "raid_level": "raid1", 00:16:45.996 "superblock": true, 00:16:45.996 "num_base_bdevs": 2, 00:16:45.996 "num_base_bdevs_discovered": 2, 00:16:45.996 "num_base_bdevs_operational": 2, 00:16:45.996 "process": { 00:16:45.996 "type": "rebuild", 00:16:45.996 "target": "spare", 00:16:45.996 "progress": { 00:16:45.996 "blocks": 2816, 00:16:45.996 "percent": 35 00:16:45.996 } 00:16:45.996 }, 00:16:45.996 "base_bdevs_list": [ 00:16:45.996 { 00:16:45.996 "name": "spare", 00:16:45.996 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:45.996 "is_configured": true, 00:16:45.996 "data_offset": 256, 00:16:45.996 "data_size": 7936 00:16:45.996 }, 00:16:45.996 { 00:16:45.997 "name": "BaseBdev2", 00:16:45.997 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:45.997 "is_configured": true, 00:16:45.997 "data_offset": 256, 00:16:45.997 "data_size": 7936 00:16:45.997 } 00:16:45.997 ] 00:16:45.997 }' 00:16:45.997 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.997 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:45.997 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.997 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.997 18:57:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 
00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.379 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.379 "name": "raid_bdev1", 00:16:47.379 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:47.379 "strip_size_kb": 0, 00:16:47.379 "state": "online", 00:16:47.379 "raid_level": "raid1", 00:16:47.379 "superblock": true, 00:16:47.379 "num_base_bdevs": 2, 00:16:47.379 "num_base_bdevs_discovered": 2, 00:16:47.379 "num_base_bdevs_operational": 2, 00:16:47.379 "process": { 00:16:47.379 "type": "rebuild", 00:16:47.379 "target": "spare", 00:16:47.379 "progress": { 00:16:47.379 "blocks": 5632, 00:16:47.379 "percent": 70 00:16:47.379 } 00:16:47.379 }, 00:16:47.380 "base_bdevs_list": [ 00:16:47.380 { 00:16:47.380 "name": "spare", 00:16:47.380 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:47.380 "is_configured": true, 00:16:47.380 
"data_offset": 256, 00:16:47.380 "data_size": 7936 00:16:47.380 }, 00:16:47.380 { 00:16:47.380 "name": "BaseBdev2", 00:16:47.380 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:47.380 "is_configured": true, 00:16:47.380 "data_offset": 256, 00:16:47.380 "data_size": 7936 00:16:47.380 } 00:16:47.380 ] 00:16:47.380 }' 00:16:47.380 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.380 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.380 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.380 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.380 18:57:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.949 [2024-11-16 18:57:31.249732] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:47.949 [2024-11-16 18:57:31.249847] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:47.949 [2024-11-16 18:57:31.249976] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.208 18:57:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.208 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.208 "name": "raid_bdev1", 00:16:48.208 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:48.208 "strip_size_kb": 0, 00:16:48.208 "state": "online", 00:16:48.208 "raid_level": "raid1", 00:16:48.208 "superblock": true, 00:16:48.208 "num_base_bdevs": 2, 00:16:48.208 "num_base_bdevs_discovered": 2, 00:16:48.208 "num_base_bdevs_operational": 2, 00:16:48.208 "base_bdevs_list": [ 00:16:48.208 { 00:16:48.208 "name": "spare", 00:16:48.208 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:48.208 "is_configured": true, 00:16:48.209 "data_offset": 256, 00:16:48.209 "data_size": 7936 00:16:48.209 }, 00:16:48.209 { 00:16:48.209 "name": "BaseBdev2", 00:16:48.209 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:48.209 "is_configured": true, 00:16:48.209 "data_offset": 256, 00:16:48.209 "data_size": 7936 00:16:48.209 } 00:16:48.209 ] 00:16:48.209 }' 00:16:48.209 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.209 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:48.209 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:48.469 18:57:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.469 "name": "raid_bdev1", 00:16:48.469 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:48.469 "strip_size_kb": 0, 00:16:48.469 "state": "online", 00:16:48.469 "raid_level": "raid1", 00:16:48.469 "superblock": true, 00:16:48.469 "num_base_bdevs": 2, 00:16:48.469 "num_base_bdevs_discovered": 2, 00:16:48.469 "num_base_bdevs_operational": 2, 00:16:48.469 "base_bdevs_list": [ 00:16:48.469 { 00:16:48.469 "name": "spare", 00:16:48.469 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:48.469 "is_configured": true, 00:16:48.469 "data_offset": 256, 00:16:48.469 "data_size": 7936 00:16:48.469 }, 00:16:48.469 { 00:16:48.469 "name": "BaseBdev2", 00:16:48.469 "uuid": 
"a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:48.469 "is_configured": true, 00:16:48.469 "data_offset": 256, 00:16:48.469 "data_size": 7936 00:16:48.469 } 00:16:48.469 ] 00:16:48.469 }' 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.469 "name": "raid_bdev1", 00:16:48.469 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:48.469 "strip_size_kb": 0, 00:16:48.469 "state": "online", 00:16:48.469 "raid_level": "raid1", 00:16:48.469 "superblock": true, 00:16:48.469 "num_base_bdevs": 2, 00:16:48.469 "num_base_bdevs_discovered": 2, 00:16:48.469 "num_base_bdevs_operational": 2, 00:16:48.469 "base_bdevs_list": [ 00:16:48.469 { 00:16:48.469 "name": "spare", 00:16:48.469 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:48.469 "is_configured": true, 00:16:48.469 "data_offset": 256, 00:16:48.469 "data_size": 7936 00:16:48.469 }, 00:16:48.469 { 00:16:48.469 "name": "BaseBdev2", 00:16:48.469 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:48.469 "is_configured": true, 00:16:48.469 "data_offset": 256, 00:16:48.469 "data_size": 7936 00:16:48.469 } 00:16:48.469 ] 00:16:48.469 }' 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.469 18:57:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.040 [2024-11-16 18:57:32.260639] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:49.040 [2024-11-16 
18:57:32.260680] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:49.040 [2024-11-16 18:57:32.260753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:49.040 [2024-11-16 18:57:32.260815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:49.040 [2024-11-16 18:57:32.260824] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:49.040 
18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.040 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:49.300 /dev/nbd0 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.300 1+0 
records in 00:16:49.300 1+0 records out 00:16:49.300 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420781 s, 9.7 MB/s 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:49.300 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:49.301 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.301 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.301 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:49.301 /dev/nbd1 00:16:49.301 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:49.301 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:49.301 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:49.301 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:16:49.301 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 
00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:49.561 1+0 records in 00:16:49.561 1+0 records out 00:16:49.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035208 s, 11.6 MB/s 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:49.561 18:57:32 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.561 18:57:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.821 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:50.082 
18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 [2024-11-16 18:57:33.385492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:50.082 [2024-11-16 18:57:33.385554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.082 [2024-11-16 18:57:33.385579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:50.082 [2024-11-16 18:57:33.385588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.082 [2024-11-16 18:57:33.387693] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.082 [2024-11-16 18:57:33.387726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:50.082 [2024-11-16 18:57:33.387815] bdev_raid.c:3907:raid_bdev_examine_cont: 
*DEBUG*: raid superblock found on bdev spare 00:16:50.082 [2024-11-16 18:57:33.387866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:50.082 [2024-11-16 18:57:33.388026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.082 spare 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 [2024-11-16 18:57:33.487928] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:50.082 [2024-11-16 18:57:33.487954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:50.082 [2024-11-16 18:57:33.488206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:16:50.082 [2024-11-16 18:57:33.488361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:50.082 [2024-11-16 18:57:33.488371] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:50.082 [2024-11-16 18:57:33.488511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
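The `verify_raid_bdev_state` calls traced here fetch `bdev_raid_get_bdevs all` and filter with `jq`. A minimal sketch of that selection, using inline JSON abbreviated from the log output rather than a live `rpc.py` call:

```shell
# Select the raid bdev by name, then pull the fields the verifier compares
# (inline sample data, not a live bdev_raid_get_bdevs query).
raid_json='[{"name":"raid_bdev1","state":"online","raid_level":"raid1","strip_size_kb":0,"num_base_bdevs_discovered":2,"num_base_bdevs_operational":2}]'
raid_bdev_info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<< "$raid_json")
state=$(jq -r '.state' <<< "$raid_bdev_info")
raid_level=$(jq -r '.raid_level' <<< "$raid_bdev_info")
echo "$state $raid_level"
```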
00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.082 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.082 "name": "raid_bdev1", 00:16:50.082 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:50.082 "strip_size_kb": 0, 00:16:50.082 "state": "online", 00:16:50.082 "raid_level": "raid1", 00:16:50.082 "superblock": true, 00:16:50.082 "num_base_bdevs": 2, 00:16:50.082 "num_base_bdevs_discovered": 2, 00:16:50.082 "num_base_bdevs_operational": 2, 00:16:50.082 "base_bdevs_list": [ 00:16:50.082 { 00:16:50.082 "name": "spare", 00:16:50.082 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:50.082 "is_configured": true, 00:16:50.082 "data_offset": 256, 
00:16:50.082 "data_size": 7936 00:16:50.082 }, 00:16:50.082 { 00:16:50.082 "name": "BaseBdev2", 00:16:50.082 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:50.083 "is_configured": true, 00:16:50.083 "data_offset": 256, 00:16:50.083 "data_size": 7936 00:16:50.083 } 00:16:50.083 ] 00:16:50.083 }' 00:16:50.083 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.083 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:50.653 "name": "raid_bdev1", 00:16:50.653 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:50.653 "strip_size_kb": 0, 00:16:50.653 "state": "online", 00:16:50.653 "raid_level": "raid1", 00:16:50.653 "superblock": true, 00:16:50.653 
"num_base_bdevs": 2, 00:16:50.653 "num_base_bdevs_discovered": 2, 00:16:50.653 "num_base_bdevs_operational": 2, 00:16:50.653 "base_bdevs_list": [ 00:16:50.653 { 00:16:50.653 "name": "spare", 00:16:50.653 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:50.653 "is_configured": true, 00:16:50.653 "data_offset": 256, 00:16:50.653 "data_size": 7936 00:16:50.653 }, 00:16:50.653 { 00:16:50.653 "name": "BaseBdev2", 00:16:50.653 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:50.653 "is_configured": true, 00:16:50.653 "data_offset": 256, 00:16:50.653 "data_size": 7936 00:16:50.653 } 00:16:50.653 ] 00:16:50.653 }' 00:16:50.653 18:57:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.653 
18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.653 [2024-11-16 18:57:34.092346] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.653 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:51.049 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.049 "name": "raid_bdev1", 00:16:51.049 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:51.049 "strip_size_kb": 0, 00:16:51.049 "state": "online", 00:16:51.049 "raid_level": "raid1", 00:16:51.049 "superblock": true, 00:16:51.049 "num_base_bdevs": 2, 00:16:51.049 "num_base_bdevs_discovered": 1, 00:16:51.049 "num_base_bdevs_operational": 1, 00:16:51.049 "base_bdevs_list": [ 00:16:51.049 { 00:16:51.049 "name": null, 00:16:51.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.049 "is_configured": false, 00:16:51.049 "data_offset": 0, 00:16:51.049 "data_size": 7936 00:16:51.049 }, 00:16:51.049 { 00:16:51.049 "name": "BaseBdev2", 00:16:51.049 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:51.049 "is_configured": true, 00:16:51.049 "data_offset": 256, 00:16:51.049 "data_size": 7936 00:16:51.049 } 00:16:51.049 ] 00:16:51.049 }' 00:16:51.049 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.049 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.309 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:51.309 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.309 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.309 [2024-11-16 18:57:34.563581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.309 [2024-11-16 18:57:34.563843] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:51.309 [2024-11-16 18:57:34.563911] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:51.309 [2024-11-16 18:57:34.563971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:51.309 [2024-11-16 18:57:34.579396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:16:51.309 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.309 18:57:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:51.309 [2024-11-16 18:57:34.581256] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:52.250 "name": "raid_bdev1", 00:16:52.250 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:52.250 "strip_size_kb": 0, 00:16:52.250 "state": "online", 
00:16:52.250 "raid_level": "raid1", 00:16:52.250 "superblock": true, 00:16:52.250 "num_base_bdevs": 2, 00:16:52.250 "num_base_bdevs_discovered": 2, 00:16:52.250 "num_base_bdevs_operational": 2, 00:16:52.250 "process": { 00:16:52.250 "type": "rebuild", 00:16:52.250 "target": "spare", 00:16:52.250 "progress": { 00:16:52.250 "blocks": 2560, 00:16:52.250 "percent": 32 00:16:52.250 } 00:16:52.250 }, 00:16:52.250 "base_bdevs_list": [ 00:16:52.250 { 00:16:52.250 "name": "spare", 00:16:52.250 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:52.250 "is_configured": true, 00:16:52.250 "data_offset": 256, 00:16:52.250 "data_size": 7936 00:16:52.250 }, 00:16:52.250 { 00:16:52.250 "name": "BaseBdev2", 00:16:52.250 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:52.250 "is_configured": true, 00:16:52.250 "data_offset": 256, 00:16:52.250 "data_size": 7936 00:16:52.250 } 00:16:52.250 ] 00:16:52.250 }' 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.250 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.250 [2024-11-16 18:57:35.716902] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.511 [2024-11-16 18:57:35.785724] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:52.511 [2024-11-16 
18:57:35.785839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.511 [2024-11-16 18:57:35.785873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:52.511 [2024-11-16 18:57:35.785895] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.511 "name": "raid_bdev1", 00:16:52.511 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:52.511 "strip_size_kb": 0, 00:16:52.511 "state": "online", 00:16:52.511 "raid_level": "raid1", 00:16:52.511 "superblock": true, 00:16:52.511 "num_base_bdevs": 2, 00:16:52.511 "num_base_bdevs_discovered": 1, 00:16:52.511 "num_base_bdevs_operational": 1, 00:16:52.511 "base_bdevs_list": [ 00:16:52.511 { 00:16:52.511 "name": null, 00:16:52.511 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.511 "is_configured": false, 00:16:52.511 "data_offset": 0, 00:16:52.511 "data_size": 7936 00:16:52.511 }, 00:16:52.511 { 00:16:52.511 "name": "BaseBdev2", 00:16:52.511 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:52.511 "is_configured": true, 00:16:52.511 "data_offset": 256, 00:16:52.511 "data_size": 7936 00:16:52.511 } 00:16:52.511 ] 00:16:52.511 }' 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.511 18:57:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.082 18:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:53.082 18:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.082 18:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:53.082 [2024-11-16 18:57:36.257664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:53.082 [2024-11-16 18:57:36.257723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:53.082 [2024-11-16 18:57:36.257742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x61600000ab80 00:16:53.082 [2024-11-16 18:57:36.257752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:53.082 [2024-11-16 18:57:36.258213] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:53.082 [2024-11-16 18:57:36.258242] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:53.082 [2024-11-16 18:57:36.258324] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:53.082 [2024-11-16 18:57:36.258339] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:53.082 [2024-11-16 18:57:36.258352] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:53.082 [2024-11-16 18:57:36.258376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:53.082 [2024-11-16 18:57:36.275152] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:16:53.082 spare 00:16:53.082 18:57:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.082 18:57:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:53.082 [2024-11-16 18:57:36.277094] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.023 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.023 "name": "raid_bdev1", 00:16:54.023 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:54.023 "strip_size_kb": 0, 00:16:54.023 "state": "online", 00:16:54.023 "raid_level": "raid1", 00:16:54.023 "superblock": true, 00:16:54.023 "num_base_bdevs": 2, 00:16:54.023 "num_base_bdevs_discovered": 2, 00:16:54.023 "num_base_bdevs_operational": 2, 00:16:54.023 "process": { 00:16:54.023 "type": "rebuild", 00:16:54.023 "target": "spare", 00:16:54.023 "progress": { 00:16:54.023 "blocks": 2560, 00:16:54.023 "percent": 32 00:16:54.023 } 00:16:54.023 }, 00:16:54.023 "base_bdevs_list": [ 00:16:54.023 { 00:16:54.023 "name": "spare", 00:16:54.023 "uuid": "7413db82-c465-599c-aac4-b169af9736fc", 00:16:54.023 "is_configured": true, 00:16:54.023 "data_offset": 256, 00:16:54.023 "data_size": 7936 00:16:54.023 }, 00:16:54.023 { 00:16:54.023 "name": "BaseBdev2", 00:16:54.023 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:54.023 "is_configured": true, 00:16:54.023 "data_offset": 256, 00:16:54.023 "data_size": 7936 00:16:54.023 } 00:16:54.023 ] 00:16:54.023 }' 00:16:54.024 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.024 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:16:54.024 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.024 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:54.024 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:54.024 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.024 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.024 [2024-11-16 18:57:37.440223] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.024 [2024-11-16 18:57:37.481621] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:54.024 [2024-11-16 18:57:37.481687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.024 [2024-11-16 18:57:37.481719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:54.024 [2024-11-16 18:57:37.481726] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.284 "name": "raid_bdev1", 00:16:54.284 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:54.284 "strip_size_kb": 0, 00:16:54.284 "state": "online", 00:16:54.284 "raid_level": "raid1", 00:16:54.284 "superblock": true, 00:16:54.284 "num_base_bdevs": 2, 00:16:54.284 "num_base_bdevs_discovered": 1, 00:16:54.284 "num_base_bdevs_operational": 1, 00:16:54.284 "base_bdevs_list": [ 00:16:54.284 { 00:16:54.284 "name": null, 00:16:54.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.284 "is_configured": false, 00:16:54.284 "data_offset": 0, 00:16:54.284 "data_size": 7936 00:16:54.284 }, 00:16:54.284 { 00:16:54.284 "name": "BaseBdev2", 00:16:54.284 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:54.284 "is_configured": true, 00:16:54.284 "data_offset": 256, 00:16:54.284 "data_size": 7936 00:16:54.284 } 00:16:54.284 ] 00:16:54.284 }' 
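As the JSON above shows, after `bdev_raid_remove_base_bdev` the removed slot stays in `base_bdevs_list` with a `null` name and `is_configured: false`. A sketch of checking that degraded state, again with inline JSON abbreviated from the log:

```shell
# Count configured base bdevs and read the emptied slot's name
# (inline sample data, not a live bdev_raid_get_bdevs query).
info='{"num_base_bdevs":2,"base_bdevs_list":[{"name":null,"is_configured":false},{"name":"BaseBdev2","is_configured":true}]}'
configured=$(jq '[.base_bdevs_list[] | select(.is_configured)] | length' <<< "$info")
slot0=$(jq -r '.base_bdevs_list[0].name // "none"' <<< "$info")
echo "configured=$configured slot0=$slot0"
```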
00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.284 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:54.545 "name": "raid_bdev1", 00:16:54.545 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:54.545 "strip_size_kb": 0, 00:16:54.545 "state": "online", 00:16:54.545 "raid_level": "raid1", 00:16:54.545 "superblock": true, 00:16:54.545 "num_base_bdevs": 2, 00:16:54.545 "num_base_bdevs_discovered": 1, 00:16:54.545 "num_base_bdevs_operational": 1, 00:16:54.545 "base_bdevs_list": [ 00:16:54.545 { 00:16:54.545 "name": null, 00:16:54.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.545 "is_configured": false, 00:16:54.545 "data_offset": 0, 
00:16:54.545 "data_size": 7936 00:16:54.545 }, 00:16:54.545 { 00:16:54.545 "name": "BaseBdev2", 00:16:54.545 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:54.545 "is_configured": true, 00:16:54.545 "data_offset": 256, 00:16:54.545 "data_size": 7936 00:16:54.545 } 00:16:54.545 ] 00:16:54.545 }' 00:16:54.545 18:57:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.806 [2024-11-16 18:57:38.113587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:54.806 [2024-11-16 18:57:38.113658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:54.806 [2024-11-16 18:57:38.113694] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:54.806 [2024-11-16 18:57:38.113711] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:54.806 [2024-11-16 18:57:38.114122] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:54.806 [2024-11-16 18:57:38.114146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:54.806 [2024-11-16 18:57:38.114222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:54.806 [2024-11-16 18:57:38.114239] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:54.806 [2024-11-16 18:57:38.114249] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:54.806 [2024-11-16 18:57:38.114259] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:54.806 BaseBdev1 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.806 18:57:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.746 "name": "raid_bdev1", 00:16:55.746 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:55.746 "strip_size_kb": 0, 00:16:55.746 "state": "online", 00:16:55.746 "raid_level": "raid1", 00:16:55.746 "superblock": true, 00:16:55.746 "num_base_bdevs": 2, 00:16:55.746 "num_base_bdevs_discovered": 1, 00:16:55.746 "num_base_bdevs_operational": 1, 00:16:55.746 "base_bdevs_list": [ 00:16:55.746 { 00:16:55.746 "name": null, 00:16:55.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.746 "is_configured": false, 00:16:55.746 "data_offset": 0, 00:16:55.746 "data_size": 7936 00:16:55.746 }, 00:16:55.746 { 00:16:55.746 "name": "BaseBdev2", 00:16:55.746 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:55.746 "is_configured": true, 00:16:55.746 "data_offset": 256, 00:16:55.746 "data_size": 7936 00:16:55.746 } 00:16:55.746 ] 00:16:55.746 }' 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.746 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 
00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:56.317 "name": "raid_bdev1", 00:16:56.317 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:56.317 "strip_size_kb": 0, 00:16:56.317 "state": "online", 00:16:56.317 "raid_level": "raid1", 00:16:56.317 "superblock": true, 00:16:56.317 "num_base_bdevs": 2, 00:16:56.317 "num_base_bdevs_discovered": 1, 00:16:56.317 "num_base_bdevs_operational": 1, 00:16:56.317 "base_bdevs_list": [ 00:16:56.317 { 00:16:56.317 "name": null, 00:16:56.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.317 "is_configured": false, 00:16:56.317 "data_offset": 0, 00:16:56.317 "data_size": 7936 00:16:56.317 }, 00:16:56.317 { 00:16:56.317 "name": "BaseBdev2", 00:16:56.317 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:56.317 "is_configured": true, 
00:16:56.317 "data_offset": 256, 00:16:56.317 "data_size": 7936 00:16:56.317 } 00:16:56.317 ] 00:16:56.317 }' 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.317 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:56.318 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.318 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.318 [2024-11-16 18:57:39.662956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.318 [2024-11-16 18:57:39.663113] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:56.318 [2024-11-16 18:57:39.663127] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:56.318 request: 00:16:56.318 { 00:16:56.318 "base_bdev": "BaseBdev1", 00:16:56.318 "raid_bdev": "raid_bdev1", 00:16:56.318 "method": "bdev_raid_add_base_bdev", 00:16:56.318 "req_id": 1 00:16:56.318 } 00:16:56.318 Got JSON-RPC error response 00:16:56.318 response: 00:16:56.318 { 00:16:56.318 "code": -22, 00:16:56.318 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:56.318 } 00:16:56.318 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:56.318 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:16:56.318 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:56.318 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:56.318 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:56.318 18:57:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:57.258 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:16:57.258 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:57.258 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.258 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:57.258 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.259 "name": "raid_bdev1", 00:16:57.259 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:57.259 "strip_size_kb": 0, 00:16:57.259 "state": "online", 00:16:57.259 "raid_level": "raid1", 00:16:57.259 "superblock": true, 00:16:57.259 "num_base_bdevs": 2, 00:16:57.259 "num_base_bdevs_discovered": 1, 00:16:57.259 "num_base_bdevs_operational": 1, 00:16:57.259 "base_bdevs_list": [ 00:16:57.259 { 00:16:57.259 "name": null, 00:16:57.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.259 "is_configured": false, 00:16:57.259 "data_offset": 0, 00:16:57.259 "data_size": 7936 00:16:57.259 }, 00:16:57.259 { 00:16:57.259 "name": "BaseBdev2", 00:16:57.259 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:57.259 "is_configured": true, 00:16:57.259 "data_offset": 256, 00:16:57.259 "data_size": 7936 00:16:57.259 } 00:16:57.259 ] 00:16:57.259 }' 
00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.259 18:57:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.829 "name": "raid_bdev1", 00:16:57.829 "uuid": "35576137-2765-45fc-a54f-c03f361ae707", 00:16:57.829 "strip_size_kb": 0, 00:16:57.829 "state": "online", 00:16:57.829 "raid_level": "raid1", 00:16:57.829 "superblock": true, 00:16:57.829 "num_base_bdevs": 2, 00:16:57.829 "num_base_bdevs_discovered": 1, 00:16:57.829 "num_base_bdevs_operational": 1, 00:16:57.829 "base_bdevs_list": [ 00:16:57.829 { 00:16:57.829 "name": null, 00:16:57.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.829 "is_configured": false, 00:16:57.829 "data_offset": 0, 
00:16:57.829 "data_size": 7936 00:16:57.829 }, 00:16:57.829 { 00:16:57.829 "name": "BaseBdev2", 00:16:57.829 "uuid": "a9f475a9-b05c-543d-9ea9-c311a6191d72", 00:16:57.829 "is_configured": true, 00:16:57.829 "data_offset": 256, 00:16:57.829 "data_size": 7936 00:16:57.829 } 00:16:57.829 ] 00:16:57.829 }' 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86150 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86150 ']' 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86150 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.829 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86150 00:16:58.090 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:58.090 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:58.090 killing process with pid 86150 00:16:58.090 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86150' 00:16:58.090 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86150 00:16:58.090 Received shutdown signal, test time was about 
60.000000 seconds 00:16:58.090 00:16:58.090 Latency(us) 00:16:58.090 [2024-11-16T18:57:41.562Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.090 [2024-11-16T18:57:41.562Z] =================================================================================================================== 00:16:58.090 [2024-11-16T18:57:41.562Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:58.090 [2024-11-16 18:57:41.317902] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.090 [2024-11-16 18:57:41.318017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.090 [2024-11-16 18:57:41.318082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.090 [2024-11-16 18:57:41.318093] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:58.090 18:57:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86150 00:16:58.350 [2024-11-16 18:57:41.602185] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.291 18:57:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:16:59.291 00:16:59.291 real 0m19.406s 00:16:59.291 user 0m25.296s 00:16:59.291 sys 0m2.453s 00:16:59.291 18:57:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:59.291 18:57:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.291 ************************************ 00:16:59.291 END TEST raid_rebuild_test_sb_4k 00:16:59.291 ************************************ 00:16:59.291 18:57:42 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:16:59.291 18:57:42 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:16:59.291 18:57:42 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:59.291 18:57:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:59.291 18:57:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.291 ************************************ 00:16:59.291 START TEST raid_state_function_test_sb_md_separate 00:16:59.291 ************************************ 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.291 18:57:42 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86836 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:59.291 Process raid pid: 86836 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86836' 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86836 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86836 ']' 00:16:59.291 18:57:42 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:59.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:59.291 18:57:42 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:16:59.552 [2024-11-16 18:57:42.795211] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:59.552 [2024-11-16 18:57:42.795337] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.552 [2024-11-16 18:57:42.968294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.812 [2024-11-16 18:57:43.073833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.812 [2024-11-16 18:57:43.263367] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:59.812 [2024-11-16 18:57:43.263406] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # 
rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.383 [2024-11-16 18:57:43.610729] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.383 [2024-11-16 18:57:43.610778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.383 [2024-11-16 18:57:43.610787] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.383 [2024-11-16 18:57:43.610812] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.383 "name": "Existed_Raid", 00:17:00.383 "uuid": "ea45b7a6-1feb-4bb6-bf45-1d52acb67a8a", 00:17:00.383 "strip_size_kb": 0, 00:17:00.383 "state": "configuring", 00:17:00.383 "raid_level": "raid1", 00:17:00.383 "superblock": true, 00:17:00.383 "num_base_bdevs": 2, 00:17:00.383 "num_base_bdevs_discovered": 0, 00:17:00.383 "num_base_bdevs_operational": 2, 00:17:00.383 "base_bdevs_list": [ 00:17:00.383 { 00:17:00.383 "name": "BaseBdev1", 00:17:00.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.383 "is_configured": false, 00:17:00.383 "data_offset": 0, 00:17:00.383 "data_size": 0 00:17:00.383 }, 00:17:00.383 { 00:17:00.383 "name": "BaseBdev2", 00:17:00.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.383 "is_configured": false, 00:17:00.383 "data_offset": 0, 00:17:00.383 "data_size": 0 00:17:00.383 } 00:17:00.383 ] 00:17:00.383 }' 00:17:00.383 18:57:43 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.383 18:57:43 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.644 [2024-11-16 18:57:44.061859] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:00.644 [2024-11-16 18:57:44.061892] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.644 [2024-11-16 18:57:44.073827] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.644 [2024-11-16 18:57:44.073866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.644 [2024-11-16 18:57:44.073890] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.644 [2024-11-16 18:57:44.073901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.644 18:57:44 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.644 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.905 [2024-11-16 18:57:44.119504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:00.905 BaseBdev1 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.905 [ 00:17:00.905 { 00:17:00.905 "name": "BaseBdev1", 00:17:00.905 "aliases": [ 00:17:00.905 "ff8f585a-a0fc-44ca-89b5-dbda9c685d1f" 00:17:00.905 ], 00:17:00.905 "product_name": "Malloc disk", 00:17:00.905 "block_size": 4096, 00:17:00.905 "num_blocks": 8192, 00:17:00.905 "uuid": "ff8f585a-a0fc-44ca-89b5-dbda9c685d1f", 00:17:00.905 "md_size": 32, 00:17:00.905 "md_interleave": false, 00:17:00.905 "dif_type": 0, 00:17:00.905 "assigned_rate_limits": { 00:17:00.905 "rw_ios_per_sec": 0, 00:17:00.905 "rw_mbytes_per_sec": 0, 00:17:00.905 "r_mbytes_per_sec": 0, 00:17:00.905 "w_mbytes_per_sec": 0 00:17:00.905 }, 00:17:00.905 "claimed": true, 00:17:00.905 "claim_type": "exclusive_write", 00:17:00.905 "zoned": false, 00:17:00.905 "supported_io_types": { 00:17:00.905 "read": true, 00:17:00.905 "write": true, 00:17:00.905 "unmap": true, 00:17:00.905 "flush": true, 00:17:00.905 "reset": true, 00:17:00.905 "nvme_admin": false, 00:17:00.905 "nvme_io": false, 00:17:00.905 "nvme_io_md": false, 00:17:00.905 "write_zeroes": true, 00:17:00.905 "zcopy": true, 00:17:00.905 "get_zone_info": false, 00:17:00.905 "zone_management": false, 00:17:00.905 "zone_append": false, 00:17:00.905 "compare": false, 00:17:00.905 "compare_and_write": false, 00:17:00.905 "abort": true, 00:17:00.905 "seek_hole": false, 00:17:00.905 "seek_data": false, 00:17:00.905 "copy": true, 00:17:00.905 "nvme_iov_md": false 00:17:00.905 }, 00:17:00.905 "memory_domains": [ 00:17:00.905 { 00:17:00.905 "dma_device_id": "system", 00:17:00.905 "dma_device_type": 1 00:17:00.905 }, 
00:17:00.905 { 00:17:00.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.905 "dma_device_type": 2 00:17:00.905 } 00:17:00.905 ], 00:17:00.905 "driver_specific": {} 00:17:00.905 } 00:17:00.905 ] 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:00.905 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.906 "name": "Existed_Raid", 00:17:00.906 "uuid": "f29dd49e-460b-4cc1-9c1d-760cff46f5e4", 00:17:00.906 "strip_size_kb": 0, 00:17:00.906 "state": "configuring", 00:17:00.906 "raid_level": "raid1", 00:17:00.906 "superblock": true, 00:17:00.906 "num_base_bdevs": 2, 00:17:00.906 "num_base_bdevs_discovered": 1, 00:17:00.906 "num_base_bdevs_operational": 2, 00:17:00.906 "base_bdevs_list": [ 00:17:00.906 { 00:17:00.906 "name": "BaseBdev1", 00:17:00.906 "uuid": "ff8f585a-a0fc-44ca-89b5-dbda9c685d1f", 00:17:00.906 "is_configured": true, 00:17:00.906 "data_offset": 256, 00:17:00.906 "data_size": 7936 00:17:00.906 }, 00:17:00.906 { 00:17:00.906 "name": "BaseBdev2", 00:17:00.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.906 "is_configured": false, 00:17:00.906 "data_offset": 0, 00:17:00.906 "data_size": 0 00:17:00.906 } 00:17:00.906 ] 00:17:00.906 }' 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.906 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:01.167 [2024-11-16 18:57:44.582750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.167 [2024-11-16 18:57:44.582795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.167 [2024-11-16 18:57:44.594770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.167 [2024-11-16 18:57:44.596531] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.167 [2024-11-16 18:57:44.596575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.167 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.427 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.427 "name": "Existed_Raid", 00:17:01.427 "uuid": "f7491ad5-a32e-4fcb-b436-e917fcfa0426", 00:17:01.427 "strip_size_kb": 0, 00:17:01.427 "state": "configuring", 00:17:01.427 "raid_level": "raid1", 00:17:01.427 "superblock": true, 00:17:01.427 "num_base_bdevs": 2, 00:17:01.427 "num_base_bdevs_discovered": 1, 00:17:01.427 
"num_base_bdevs_operational": 2, 00:17:01.427 "base_bdevs_list": [ 00:17:01.427 { 00:17:01.427 "name": "BaseBdev1", 00:17:01.427 "uuid": "ff8f585a-a0fc-44ca-89b5-dbda9c685d1f", 00:17:01.427 "is_configured": true, 00:17:01.427 "data_offset": 256, 00:17:01.427 "data_size": 7936 00:17:01.427 }, 00:17:01.427 { 00:17:01.427 "name": "BaseBdev2", 00:17:01.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.427 "is_configured": false, 00:17:01.427 "data_offset": 0, 00:17:01.427 "data_size": 0 00:17:01.427 } 00:17:01.427 ] 00:17:01.427 }' 00:17:01.427 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.427 18:57:44 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.688 [2024-11-16 18:57:45.115694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:01.688 [2024-11-16 18:57:45.115921] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:01.688 [2024-11-16 18:57:45.115935] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:01.688 [2024-11-16 18:57:45.116032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:01.688 [2024-11-16 18:57:45.116154] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:01.688 [2024-11-16 18:57:45.116172] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:01.688 [2024-11-16 
18:57:45.116269] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:01.688 BaseBdev2 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.688 [ 00:17:01.688 { 00:17:01.688 "name": "BaseBdev2", 00:17:01.688 "aliases": [ 00:17:01.688 
"07392025-4166-4504-94c6-aa4d5a6ad4fe" 00:17:01.688 ], 00:17:01.688 "product_name": "Malloc disk", 00:17:01.688 "block_size": 4096, 00:17:01.688 "num_blocks": 8192, 00:17:01.688 "uuid": "07392025-4166-4504-94c6-aa4d5a6ad4fe", 00:17:01.688 "md_size": 32, 00:17:01.688 "md_interleave": false, 00:17:01.688 "dif_type": 0, 00:17:01.688 "assigned_rate_limits": { 00:17:01.688 "rw_ios_per_sec": 0, 00:17:01.688 "rw_mbytes_per_sec": 0, 00:17:01.688 "r_mbytes_per_sec": 0, 00:17:01.688 "w_mbytes_per_sec": 0 00:17:01.688 }, 00:17:01.688 "claimed": true, 00:17:01.688 "claim_type": "exclusive_write", 00:17:01.688 "zoned": false, 00:17:01.688 "supported_io_types": { 00:17:01.688 "read": true, 00:17:01.688 "write": true, 00:17:01.688 "unmap": true, 00:17:01.688 "flush": true, 00:17:01.688 "reset": true, 00:17:01.688 "nvme_admin": false, 00:17:01.688 "nvme_io": false, 00:17:01.688 "nvme_io_md": false, 00:17:01.688 "write_zeroes": true, 00:17:01.688 "zcopy": true, 00:17:01.688 "get_zone_info": false, 00:17:01.688 "zone_management": false, 00:17:01.688 "zone_append": false, 00:17:01.688 "compare": false, 00:17:01.688 "compare_and_write": false, 00:17:01.688 "abort": true, 00:17:01.688 "seek_hole": false, 00:17:01.688 "seek_data": false, 00:17:01.688 "copy": true, 00:17:01.688 "nvme_iov_md": false 00:17:01.688 }, 00:17:01.688 "memory_domains": [ 00:17:01.688 { 00:17:01.688 "dma_device_id": "system", 00:17:01.688 "dma_device_type": 1 00:17:01.688 }, 00:17:01.688 { 00:17:01.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.688 "dma_device_type": 2 00:17:01.688 } 00:17:01.688 ], 00:17:01.688 "driver_specific": {} 00:17:01.688 } 00:17:01.688 ] 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( 
i++ )) 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.688 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.948 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.948 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.948 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.948 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:01.948 18:57:45 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.948 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.948 "name": "Existed_Raid", 00:17:01.948 "uuid": "f7491ad5-a32e-4fcb-b436-e917fcfa0426", 00:17:01.948 "strip_size_kb": 0, 00:17:01.948 "state": "online", 00:17:01.948 "raid_level": "raid1", 00:17:01.948 "superblock": true, 00:17:01.948 "num_base_bdevs": 2, 00:17:01.948 "num_base_bdevs_discovered": 2, 00:17:01.948 "num_base_bdevs_operational": 2, 00:17:01.948 "base_bdevs_list": [ 00:17:01.948 { 00:17:01.948 "name": "BaseBdev1", 00:17:01.948 "uuid": "ff8f585a-a0fc-44ca-89b5-dbda9c685d1f", 00:17:01.948 "is_configured": true, 00:17:01.948 "data_offset": 256, 00:17:01.948 "data_size": 7936 00:17:01.948 }, 00:17:01.948 { 00:17:01.948 "name": "BaseBdev2", 00:17:01.948 "uuid": "07392025-4166-4504-94c6-aa4d5a6ad4fe", 00:17:01.948 "is_configured": true, 00:17:01.948 "data_offset": 256, 00:17:01.948 "data_size": 7936 00:17:01.948 } 00:17:01.948 ] 00:17:01.948 }' 00:17:01.948 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.948 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.208 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:02.208 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:02.208 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:02.208 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:02.208 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:02.208 18:57:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:02.208 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:02.208 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:02.209 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.209 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.209 [2024-11-16 18:57:45.631118] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:02.209 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.209 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:02.209 "name": "Existed_Raid", 00:17:02.209 "aliases": [ 00:17:02.209 "f7491ad5-a32e-4fcb-b436-e917fcfa0426" 00:17:02.209 ], 00:17:02.209 "product_name": "Raid Volume", 00:17:02.209 "block_size": 4096, 00:17:02.209 "num_blocks": 7936, 00:17:02.209 "uuid": "f7491ad5-a32e-4fcb-b436-e917fcfa0426", 00:17:02.209 "md_size": 32, 00:17:02.209 "md_interleave": false, 00:17:02.209 "dif_type": 0, 00:17:02.209 "assigned_rate_limits": { 00:17:02.209 "rw_ios_per_sec": 0, 00:17:02.209 "rw_mbytes_per_sec": 0, 00:17:02.209 "r_mbytes_per_sec": 0, 00:17:02.209 "w_mbytes_per_sec": 0 00:17:02.209 }, 00:17:02.209 "claimed": false, 00:17:02.209 "zoned": false, 00:17:02.209 "supported_io_types": { 00:17:02.209 "read": true, 00:17:02.209 "write": true, 00:17:02.209 "unmap": false, 00:17:02.209 "flush": false, 00:17:02.209 "reset": true, 00:17:02.209 "nvme_admin": false, 00:17:02.209 "nvme_io": false, 00:17:02.209 "nvme_io_md": false, 00:17:02.209 "write_zeroes": true, 00:17:02.209 "zcopy": false, 00:17:02.209 "get_zone_info": 
false, 00:17:02.209 "zone_management": false, 00:17:02.209 "zone_append": false, 00:17:02.209 "compare": false, 00:17:02.209 "compare_and_write": false, 00:17:02.209 "abort": false, 00:17:02.209 "seek_hole": false, 00:17:02.209 "seek_data": false, 00:17:02.209 "copy": false, 00:17:02.209 "nvme_iov_md": false 00:17:02.209 }, 00:17:02.209 "memory_domains": [ 00:17:02.209 { 00:17:02.209 "dma_device_id": "system", 00:17:02.209 "dma_device_type": 1 00:17:02.209 }, 00:17:02.209 { 00:17:02.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.209 "dma_device_type": 2 00:17:02.209 }, 00:17:02.209 { 00:17:02.209 "dma_device_id": "system", 00:17:02.209 "dma_device_type": 1 00:17:02.209 }, 00:17:02.209 { 00:17:02.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.209 "dma_device_type": 2 00:17:02.209 } 00:17:02.209 ], 00:17:02.209 "driver_specific": { 00:17:02.209 "raid": { 00:17:02.209 "uuid": "f7491ad5-a32e-4fcb-b436-e917fcfa0426", 00:17:02.209 "strip_size_kb": 0, 00:17:02.209 "state": "online", 00:17:02.209 "raid_level": "raid1", 00:17:02.209 "superblock": true, 00:17:02.209 "num_base_bdevs": 2, 00:17:02.209 "num_base_bdevs_discovered": 2, 00:17:02.209 "num_base_bdevs_operational": 2, 00:17:02.209 "base_bdevs_list": [ 00:17:02.209 { 00:17:02.209 "name": "BaseBdev1", 00:17:02.209 "uuid": "ff8f585a-a0fc-44ca-89b5-dbda9c685d1f", 00:17:02.209 "is_configured": true, 00:17:02.209 "data_offset": 256, 00:17:02.209 "data_size": 7936 00:17:02.209 }, 00:17:02.209 { 00:17:02.209 "name": "BaseBdev2", 00:17:02.209 "uuid": "07392025-4166-4504-94c6-aa4d5a6ad4fe", 00:17:02.209 "is_configured": true, 00:17:02.209 "data_offset": 256, 00:17:02.209 "data_size": 7936 00:17:02.209 } 00:17:02.209 ] 00:17:02.209 } 00:17:02.209 } 00:17:02.209 }' 00:17:02.209 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:02.469 18:57:45 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:02.469 BaseBdev2' 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.469 18:57:45 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.469 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.469 [2024-11-16 18:57:45.866476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:02.729 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.730 18:57:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.730 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.730 "name": "Existed_Raid", 
00:17:02.730 "uuid": "f7491ad5-a32e-4fcb-b436-e917fcfa0426", 00:17:02.730 "strip_size_kb": 0, 00:17:02.730 "state": "online", 00:17:02.730 "raid_level": "raid1", 00:17:02.730 "superblock": true, 00:17:02.730 "num_base_bdevs": 2, 00:17:02.730 "num_base_bdevs_discovered": 1, 00:17:02.730 "num_base_bdevs_operational": 1, 00:17:02.730 "base_bdevs_list": [ 00:17:02.730 { 00:17:02.730 "name": null, 00:17:02.730 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.730 "is_configured": false, 00:17:02.730 "data_offset": 0, 00:17:02.730 "data_size": 7936 00:17:02.730 }, 00:17:02.730 { 00:17:02.730 "name": "BaseBdev2", 00:17:02.730 "uuid": "07392025-4166-4504-94c6-aa4d5a6ad4fe", 00:17:02.730 "is_configured": true, 00:17:02.730 "data_offset": 256, 00:17:02.730 "data_size": 7936 00:17:02.730 } 00:17:02.730 ] 00:17:02.730 }' 00:17:02.730 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.730 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.990 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:02.990 [2024-11-16 18:57:46.450962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:02.990 [2024-11-16 18:57:46.451063] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:03.250 [2024-11-16 18:57:46.546691] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:03.250 [2024-11-16 18:57:46.546757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:03.250 [2024-11-16 18:57:46.546769] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86836 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86836 ']' 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86836 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86836 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:03.251 killing process with pid 86836 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86836' 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86836 00:17:03.251 [2024-11-16 18:57:46.622798] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:03.251 18:57:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86836 00:17:03.251 [2024-11-16 18:57:46.639208] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:04.201 18:57:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:04.201 00:17:04.201 real 0m4.966s 00:17:04.201 user 0m7.222s 00:17:04.201 sys 0m0.859s 00:17:04.201 18:57:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:04.201 18:57:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.201 ************************************ 00:17:04.201 END TEST raid_state_function_test_sb_md_separate 00:17:04.201 ************************************ 00:17:04.464 18:57:47 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:04.464 18:57:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:04.464 18:57:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:04.464 18:57:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:04.464 ************************************ 00:17:04.464 START TEST raid_superblock_test_md_separate 00:17:04.464 ************************************ 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:04.464 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87083 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87083 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87083 ']' 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.465 18:57:47 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.465 18:57:47 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:04.465 [2024-11-16 18:57:47.837450] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:04.465 [2024-11-16 18:57:47.837578] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87083 ] 00:17:04.724 [2024-11-16 18:57:48.017773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.724 [2024-11-16 18:57:48.123755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.984 [2024-11-16 18:57:48.307210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:04.984 [2024-11-16 18:57:48.307252] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.245 18:57:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.245 malloc1 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.245 [2024-11-16 18:57:48.683144] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:05.245 [2024-11-16 18:57:48.683219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.245 [2024-11-16 18:57:48.683241] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: 
io_device created at: 0x0x616000007280 00:17:05.245 [2024-11-16 18:57:48.683250] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.245 [2024-11-16 18:57:48.685170] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.245 [2024-11-16 18:57:48.685209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:05.245 pt1 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.245 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:05.246 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:05.246 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:05.246 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:05.246 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:05.246 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:05.246 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:05.246 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.246 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.506 malloc2 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.506 [2024-11-16 18:57:48.737082] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:05.506 [2024-11-16 18:57:48.737138] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:05.506 [2024-11-16 18:57:48.737173] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:05.506 [2024-11-16 18:57:48.737181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:05.506 [2024-11-16 18:57:48.739016] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:05.506 [2024-11-16 18:57:48.739051] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:05.506 pt2 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.506 [2024-11-16 18:57:48.749083] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:05.506 [2024-11-16 18:57:48.750847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:05.506 [2024-11-16 18:57:48.751028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:05.506 [2024-11-16 18:57:48.751043] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:05.506 [2024-11-16 18:57:48.751115] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:05.506 [2024-11-16 18:57:48.751239] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:05.506 [2024-11-16 18:57:48.751260] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:05.506 [2024-11-16 18:57:48.751366] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:05.506 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.507 18:57:48 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.507 "name": "raid_bdev1", 00:17:05.507 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:05.507 "strip_size_kb": 0, 00:17:05.507 "state": "online", 00:17:05.507 "raid_level": "raid1", 00:17:05.507 "superblock": true, 00:17:05.507 "num_base_bdevs": 2, 00:17:05.507 "num_base_bdevs_discovered": 2, 00:17:05.507 "num_base_bdevs_operational": 2, 00:17:05.507 "base_bdevs_list": [ 00:17:05.507 { 00:17:05.507 "name": "pt1", 00:17:05.507 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:05.507 "is_configured": true, 00:17:05.507 "data_offset": 256, 00:17:05.507 "data_size": 7936 00:17:05.507 }, 00:17:05.507 { 00:17:05.507 "name": "pt2", 00:17:05.507 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.507 "is_configured": true, 00:17:05.507 "data_offset": 256, 00:17:05.507 "data_size": 7936 00:17:05.507 } 00:17:05.507 ] 00:17:05.507 }' 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:05.507 18:57:48 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.767 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:05.767 [2024-11-16 18:57:49.228496] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:06.028 "name": "raid_bdev1", 00:17:06.028 "aliases": [ 00:17:06.028 "c8be51cd-1a55-43fc-929c-13ff5b905fdb" 00:17:06.028 ], 00:17:06.028 "product_name": "Raid Volume", 00:17:06.028 "block_size": 4096, 00:17:06.028 "num_blocks": 7936, 00:17:06.028 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:06.028 "md_size": 32, 
00:17:06.028 "md_interleave": false, 00:17:06.028 "dif_type": 0, 00:17:06.028 "assigned_rate_limits": { 00:17:06.028 "rw_ios_per_sec": 0, 00:17:06.028 "rw_mbytes_per_sec": 0, 00:17:06.028 "r_mbytes_per_sec": 0, 00:17:06.028 "w_mbytes_per_sec": 0 00:17:06.028 }, 00:17:06.028 "claimed": false, 00:17:06.028 "zoned": false, 00:17:06.028 "supported_io_types": { 00:17:06.028 "read": true, 00:17:06.028 "write": true, 00:17:06.028 "unmap": false, 00:17:06.028 "flush": false, 00:17:06.028 "reset": true, 00:17:06.028 "nvme_admin": false, 00:17:06.028 "nvme_io": false, 00:17:06.028 "nvme_io_md": false, 00:17:06.028 "write_zeroes": true, 00:17:06.028 "zcopy": false, 00:17:06.028 "get_zone_info": false, 00:17:06.028 "zone_management": false, 00:17:06.028 "zone_append": false, 00:17:06.028 "compare": false, 00:17:06.028 "compare_and_write": false, 00:17:06.028 "abort": false, 00:17:06.028 "seek_hole": false, 00:17:06.028 "seek_data": false, 00:17:06.028 "copy": false, 00:17:06.028 "nvme_iov_md": false 00:17:06.028 }, 00:17:06.028 "memory_domains": [ 00:17:06.028 { 00:17:06.028 "dma_device_id": "system", 00:17:06.028 "dma_device_type": 1 00:17:06.028 }, 00:17:06.028 { 00:17:06.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.028 "dma_device_type": 2 00:17:06.028 }, 00:17:06.028 { 00:17:06.028 "dma_device_id": "system", 00:17:06.028 "dma_device_type": 1 00:17:06.028 }, 00:17:06.028 { 00:17:06.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.028 "dma_device_type": 2 00:17:06.028 } 00:17:06.028 ], 00:17:06.028 "driver_specific": { 00:17:06.028 "raid": { 00:17:06.028 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:06.028 "strip_size_kb": 0, 00:17:06.028 "state": "online", 00:17:06.028 "raid_level": "raid1", 00:17:06.028 "superblock": true, 00:17:06.028 "num_base_bdevs": 2, 00:17:06.028 "num_base_bdevs_discovered": 2, 00:17:06.028 "num_base_bdevs_operational": 2, 00:17:06.028 "base_bdevs_list": [ 00:17:06.028 { 00:17:06.028 "name": "pt1", 00:17:06.028 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:06.028 "is_configured": true, 00:17:06.028 "data_offset": 256, 00:17:06.028 "data_size": 7936 00:17:06.028 }, 00:17:06.028 { 00:17:06.028 "name": "pt2", 00:17:06.028 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.028 "is_configured": true, 00:17:06.028 "data_offset": 256, 00:17:06.028 "data_size": 7936 00:17:06.028 } 00:17:06.028 ] 00:17:06.028 } 00:17:06.028 } 00:17:06.028 }' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:06.028 pt2' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:06.028 [2024-11-16 18:57:49.396232] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c8be51cd-1a55-43fc-929c-13ff5b905fdb 00:17:06.028 
18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z c8be51cd-1a55-43fc-929c-13ff5b905fdb ']' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.028 [2024-11-16 18:57:49.439899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.028 [2024-11-16 18:57:49.439925] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.028 [2024-11-16 18:57:49.439993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.028 [2024-11-16 18:57:49.440048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:06.028 [2024-11-16 18:57:49.440059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:06.028 18:57:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.028 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false 
== true ']' 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.289 [2024-11-16 18:57:49.575712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:06.289 [2024-11-16 18:57:49.577428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:06.289 [2024-11-16 18:57:49.577509] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:06.289 [2024-11-16 18:57:49.577577] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 
00:17:06.289 [2024-11-16 18:57:49.577592] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.289 [2024-11-16 18:57:49.577602] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:06.289 request: 00:17:06.289 { 00:17:06.289 "name": "raid_bdev1", 00:17:06.289 "raid_level": "raid1", 00:17:06.289 "base_bdevs": [ 00:17:06.289 "malloc1", 00:17:06.289 "malloc2" 00:17:06.289 ], 00:17:06.289 "superblock": false, 00:17:06.289 "method": "bdev_raid_create", 00:17:06.289 "req_id": 1 00:17:06.289 } 00:17:06.289 Got JSON-RPC error response 00:17:06.289 response: 00:17:06.289 { 00:17:06.289 "code": -17, 00:17:06.289 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:06.289 } 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.289 18:57:49 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.289 [2024-11-16 18:57:49.639574] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:06.289 [2024-11-16 18:57:49.639619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.289 [2024-11-16 18:57:49.639633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:06.289 [2024-11-16 18:57:49.639659] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.289 [2024-11-16 18:57:49.641455] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.289 [2024-11-16 18:57:49.641494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:06.289 [2024-11-16 18:57:49.641548] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:06.289 [2024-11-16 18:57:49.641605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:06.289 pt1 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.289 
18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.289 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.289 "name": "raid_bdev1", 00:17:06.289 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:06.289 "strip_size_kb": 0, 00:17:06.289 "state": "configuring", 00:17:06.289 "raid_level": "raid1", 00:17:06.289 "superblock": true, 00:17:06.289 "num_base_bdevs": 2, 00:17:06.289 "num_base_bdevs_discovered": 1, 00:17:06.289 
"num_base_bdevs_operational": 2, 00:17:06.289 "base_bdevs_list": [ 00:17:06.289 { 00:17:06.289 "name": "pt1", 00:17:06.289 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.289 "is_configured": true, 00:17:06.289 "data_offset": 256, 00:17:06.289 "data_size": 7936 00:17:06.289 }, 00:17:06.289 { 00:17:06.289 "name": null, 00:17:06.289 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.289 "is_configured": false, 00:17:06.289 "data_offset": 256, 00:17:06.289 "data_size": 7936 00:17:06.290 } 00:17:06.290 ] 00:17:06.290 }' 00:17:06.290 18:57:49 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.290 18:57:49 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.859 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:06.859 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:06.859 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:06.859 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:06.859 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.859 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.859 [2024-11-16 18:57:50.110758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:06.859 [2024-11-16 18:57:50.110835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:06.859 [2024-11-16 18:57:50.110852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:06.860 [2024-11-16 18:57:50.110862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:06.860 
[2024-11-16 18:57:50.111022] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:06.860 [2024-11-16 18:57:50.111037] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:06.860 [2024-11-16 18:57:50.111073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:06.860 [2024-11-16 18:57:50.111091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:06.860 [2024-11-16 18:57:50.111182] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:06.860 [2024-11-16 18:57:50.111201] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:06.860 [2024-11-16 18:57:50.111262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:06.860 [2024-11-16 18:57:50.111380] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:06.860 [2024-11-16 18:57:50.111395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:06.860 [2024-11-16 18:57:50.111482] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.860 pt2 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.860 "name": "raid_bdev1", 00:17:06.860 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:06.860 "strip_size_kb": 0, 00:17:06.860 "state": "online", 00:17:06.860 "raid_level": "raid1", 00:17:06.860 "superblock": true, 00:17:06.860 "num_base_bdevs": 2, 00:17:06.860 "num_base_bdevs_discovered": 2, 00:17:06.860 "num_base_bdevs_operational": 2, 00:17:06.860 "base_bdevs_list": [ 00:17:06.860 { 00:17:06.860 "name": 
"pt1", 00:17:06.860 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:06.860 "is_configured": true, 00:17:06.860 "data_offset": 256, 00:17:06.860 "data_size": 7936 00:17:06.860 }, 00:17:06.860 { 00:17:06.860 "name": "pt2", 00:17:06.860 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.860 "is_configured": true, 00:17:06.860 "data_offset": 256, 00:17:06.860 "data_size": 7936 00:17:06.860 } 00:17:06.860 ] 00:17:06.860 }' 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.860 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:07.120 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.120 [2024-11-16 18:57:50.562210] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.120 18:57:50 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:07.381 "name": "raid_bdev1", 00:17:07.381 "aliases": [ 00:17:07.381 "c8be51cd-1a55-43fc-929c-13ff5b905fdb" 00:17:07.381 ], 00:17:07.381 "product_name": "Raid Volume", 00:17:07.381 "block_size": 4096, 00:17:07.381 "num_blocks": 7936, 00:17:07.381 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:07.381 "md_size": 32, 00:17:07.381 "md_interleave": false, 00:17:07.381 "dif_type": 0, 00:17:07.381 "assigned_rate_limits": { 00:17:07.381 "rw_ios_per_sec": 0, 00:17:07.381 "rw_mbytes_per_sec": 0, 00:17:07.381 "r_mbytes_per_sec": 0, 00:17:07.381 "w_mbytes_per_sec": 0 00:17:07.381 }, 00:17:07.381 "claimed": false, 00:17:07.381 "zoned": false, 00:17:07.381 "supported_io_types": { 00:17:07.381 "read": true, 00:17:07.381 "write": true, 00:17:07.381 "unmap": false, 00:17:07.381 "flush": false, 00:17:07.381 "reset": true, 00:17:07.381 "nvme_admin": false, 00:17:07.381 "nvme_io": false, 00:17:07.381 "nvme_io_md": false, 00:17:07.381 "write_zeroes": true, 00:17:07.381 "zcopy": false, 00:17:07.381 "get_zone_info": false, 00:17:07.381 "zone_management": false, 00:17:07.381 "zone_append": false, 00:17:07.381 "compare": false, 00:17:07.381 "compare_and_write": false, 00:17:07.381 "abort": false, 00:17:07.381 "seek_hole": false, 00:17:07.381 "seek_data": false, 00:17:07.381 "copy": false, 00:17:07.381 "nvme_iov_md": false 00:17:07.381 }, 00:17:07.381 "memory_domains": [ 00:17:07.381 { 00:17:07.381 "dma_device_id": "system", 00:17:07.381 "dma_device_type": 1 00:17:07.381 }, 00:17:07.381 { 00:17:07.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.381 "dma_device_type": 2 00:17:07.381 }, 00:17:07.381 { 00:17:07.381 "dma_device_id": "system", 00:17:07.381 "dma_device_type": 1 00:17:07.381 }, 00:17:07.381 { 00:17:07.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:07.381 
"dma_device_type": 2 00:17:07.381 } 00:17:07.381 ], 00:17:07.381 "driver_specific": { 00:17:07.381 "raid": { 00:17:07.381 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:07.381 "strip_size_kb": 0, 00:17:07.381 "state": "online", 00:17:07.381 "raid_level": "raid1", 00:17:07.381 "superblock": true, 00:17:07.381 "num_base_bdevs": 2, 00:17:07.381 "num_base_bdevs_discovered": 2, 00:17:07.381 "num_base_bdevs_operational": 2, 00:17:07.381 "base_bdevs_list": [ 00:17:07.381 { 00:17:07.381 "name": "pt1", 00:17:07.381 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:07.381 "is_configured": true, 00:17:07.381 "data_offset": 256, 00:17:07.381 "data_size": 7936 00:17:07.381 }, 00:17:07.381 { 00:17:07.381 "name": "pt2", 00:17:07.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.381 "is_configured": true, 00:17:07.381 "data_offset": 256, 00:17:07.381 "data_size": 7936 00:17:07.381 } 00:17:07.381 ] 00:17:07.381 } 00:17:07.381 } 00:17:07.381 }' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:07.381 pt2' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.381 18:57:50 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.381 18:57:50 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.381 [2024-11-16 18:57:50.813808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:07.381 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' c8be51cd-1a55-43fc-929c-13ff5b905fdb '!=' c8be51cd-1a55-43fc-929c-13ff5b905fdb ']' 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.642 [2024-11-16 18:57:50.857518] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.642 "name": "raid_bdev1", 00:17:07.642 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:07.642 "strip_size_kb": 0, 00:17:07.642 "state": "online", 00:17:07.642 "raid_level": "raid1", 00:17:07.642 "superblock": true, 00:17:07.642 "num_base_bdevs": 2, 00:17:07.642 "num_base_bdevs_discovered": 1, 00:17:07.642 "num_base_bdevs_operational": 1, 00:17:07.642 "base_bdevs_list": [ 00:17:07.642 { 00:17:07.642 "name": null, 00:17:07.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.642 
"is_configured": false, 00:17:07.642 "data_offset": 0, 00:17:07.642 "data_size": 7936 00:17:07.642 }, 00:17:07.642 { 00:17:07.642 "name": "pt2", 00:17:07.642 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.642 "is_configured": true, 00:17:07.642 "data_offset": 256, 00:17:07.642 "data_size": 7936 00:17:07.642 } 00:17:07.642 ] 00:17:07.642 }' 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.642 18:57:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.902 [2024-11-16 18:57:51.312742] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.902 [2024-11-16 18:57:51.312766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.902 [2024-11-16 18:57:51.312821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.902 [2024-11-16 18:57:51.312864] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:07.902 [2024-11-16 18:57:51.312879] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.902 18:57:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.902 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:08.163 18:57:51 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.163 [2024-11-16 18:57:51.384669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:08.163 [2024-11-16 18:57:51.384743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.163 [2024-11-16 18:57:51.384759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:08.163 [2024-11-16 18:57:51.384769] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.163 [2024-11-16 18:57:51.386565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.163 [2024-11-16 18:57:51.386603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:08.163 [2024-11-16 18:57:51.386644] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:08.163 [2024-11-16 18:57:51.386702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.163 [2024-11-16 18:57:51.386786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:08.163 [2024-11-16 18:57:51.386797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.163 [2024-11-16 18:57:51.386880] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:08.163 [2024-11-16 18:57:51.386983] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:08.163 [2024-11-16 18:57:51.386998] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:08.163 [2024-11-16 18:57:51.387098] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.163 pt2 00:17:08.163 18:57:51 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.163 "name": "raid_bdev1", 00:17:08.163 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:08.163 "strip_size_kb": 0, 00:17:08.163 "state": "online", 00:17:08.163 "raid_level": "raid1", 00:17:08.163 "superblock": true, 00:17:08.163 "num_base_bdevs": 2, 00:17:08.163 "num_base_bdevs_discovered": 1, 00:17:08.163 "num_base_bdevs_operational": 1, 00:17:08.163 "base_bdevs_list": [ 00:17:08.163 { 00:17:08.163 "name": null, 00:17:08.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.163 "is_configured": false, 00:17:08.163 "data_offset": 256, 00:17:08.163 "data_size": 7936 00:17:08.163 }, 00:17:08.163 { 00:17:08.163 "name": "pt2", 00:17:08.163 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.163 "is_configured": true, 00:17:08.163 "data_offset": 256, 00:17:08.163 "data_size": 7936 00:17:08.163 } 00:17:08.163 ] 00:17:08.163 }' 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.163 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 [2024-11-16 18:57:51.827851] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.424 [2024-11-16 18:57:51.827877] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.424 [2024-11-16 18:57:51.827922] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.424 [2024-11-16 18:57:51.827958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in 
destruct 00:17:08.424 [2024-11-16 18:57:51.827983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.424 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.424 [2024-11-16 18:57:51.887782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:08.424 [2024-11-16 18:57:51.887845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.424 [2024-11-16 18:57:51.887860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:08.424 [2024-11-16 
18:57:51.887869] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.424 [2024-11-16 18:57:51.889749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.424 [2024-11-16 18:57:51.889779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:08.424 [2024-11-16 18:57:51.889823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:08.425 [2024-11-16 18:57:51.889858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.425 [2024-11-16 18:57:51.889966] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:08.425 [2024-11-16 18:57:51.889975] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.425 [2024-11-16 18:57:51.889989] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:08.425 [2024-11-16 18:57:51.890073] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.425 [2024-11-16 18:57:51.890147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:08.425 [2024-11-16 18:57:51.890155] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.425 [2024-11-16 18:57:51.890217] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:08.425 [2024-11-16 18:57:51.890323] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:08.425 [2024-11-16 18:57:51.890335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:08.425 [2024-11-16 18:57:51.890423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.425 pt1 00:17:08.425 18:57:51 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.425 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:08.425 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.425 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.425 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.425 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:08.425 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:08.425 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:08.425 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.425 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.710 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.710 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.710 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.710 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.710 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.711 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.711 18:57:51 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.711 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.711 "name": "raid_bdev1", 00:17:08.711 "uuid": "c8be51cd-1a55-43fc-929c-13ff5b905fdb", 00:17:08.711 "strip_size_kb": 0, 00:17:08.711 "state": "online", 00:17:08.711 "raid_level": "raid1", 00:17:08.711 "superblock": true, 00:17:08.711 "num_base_bdevs": 2, 00:17:08.711 "num_base_bdevs_discovered": 1, 00:17:08.711 "num_base_bdevs_operational": 1, 00:17:08.711 "base_bdevs_list": [ 00:17:08.711 { 00:17:08.711 "name": null, 00:17:08.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.711 "is_configured": false, 00:17:08.711 "data_offset": 256, 00:17:08.711 "data_size": 7936 00:17:08.711 }, 00:17:08.711 { 00:17:08.711 "name": "pt2", 00:17:08.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:08.711 "is_configured": true, 00:17:08.711 "data_offset": 256, 00:17:08.711 "data_size": 7936 00:17:08.711 } 00:17:08.711 ] 00:17:08.711 }' 00:17:08.711 18:57:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.711 18:57:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:08.979 18:57:52 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:08.979 [2024-11-16 18:57:52.367146] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' c8be51cd-1a55-43fc-929c-13ff5b905fdb '!=' c8be51cd-1a55-43fc-929c-13ff5b905fdb ']' 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87083 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87083 ']' 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87083 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87083 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:08.979 killing process with pid 87083 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 87083' 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87083 00:17:08.979 [2024-11-16 18:57:52.440582] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:08.979 [2024-11-16 18:57:52.440669] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.979 [2024-11-16 18:57:52.440721] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.979 [2024-11-16 18:57:52.440739] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:08.979 18:57:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87083 00:17:09.239 [2024-11-16 18:57:52.646905] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.622 18:57:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:10.622 00:17:10.622 real 0m5.934s 00:17:10.622 user 0m9.001s 00:17:10.622 sys 0m1.118s 00:17:10.622 18:57:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.622 18:57:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.622 ************************************ 00:17:10.622 END TEST raid_superblock_test_md_separate 00:17:10.622 ************************************ 00:17:10.622 18:57:53 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:10.622 18:57:53 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:10.622 18:57:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:10.622 18:57:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.622 18:57:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.622 ************************************ 
00:17:10.622 START TEST raid_rebuild_test_sb_md_separate 00:17:10.622 ************************************ 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:10.622 
18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87405 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87405 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87405 ']' 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:10.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:10.622 18:57:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.622 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:10.622 Zero copy mechanism will not be used. 00:17:10.622 [2024-11-16 18:57:53.855945] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:10.622 [2024-11-16 18:57:53.856063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87405 ] 00:17:10.622 [2024-11-16 18:57:54.030125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.882 [2024-11-16 18:57:54.137024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.882 [2024-11-16 18:57:54.326767] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:10.882 [2024-11-16 18:57:54.326808] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.452 BaseBdev1_malloc 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.452 [2024-11-16 18:57:54.715442] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:11.452 [2024-11-16 18:57:54.715504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.452 [2024-11-16 18:57:54.715542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:11.452 [2024-11-16 18:57:54.715553] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.452 [2024-11-16 18:57:54.717406] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.452 [2024-11-16 18:57:54.717446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:11.452 BaseBdev1 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.452 18:57:54 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.452 BaseBdev2_malloc 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.452 [2024-11-16 18:57:54.768980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:11.452 [2024-11-16 18:57:54.769044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.452 [2024-11-16 18:57:54.769062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:11.452 [2024-11-16 18:57:54.769073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.452 [2024-11-16 18:57:54.770848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.452 [2024-11-16 18:57:54.770884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:11.452 BaseBdev2 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.452 spare_malloc 00:17:11.452 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.453 spare_delay 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.453 [2024-11-16 18:57:54.869690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.453 [2024-11-16 18:57:54.869765] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.453 [2024-11-16 18:57:54.869784] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:11.453 [2024-11-16 18:57:54.869795] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.453 [2024-11-16 18:57:54.871632] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.453 [2024-11-16 18:57:54.871680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.453 spare 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:11.453 
18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.453 [2024-11-16 18:57:54.881707] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:11.453 [2024-11-16 18:57:54.883475] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:11.453 [2024-11-16 18:57:54.883662] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:11.453 [2024-11-16 18:57:54.883678] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:11.453 [2024-11-16 18:57:54.883746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:11.453 [2024-11-16 18:57:54.883866] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:11.453 [2024-11-16 18:57:54.883878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:11.453 [2024-11-16 18:57:54.883970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:11.453 
18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:11.453 "name": "raid_bdev1", 00:17:11.453 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:11.453 "strip_size_kb": 0, 00:17:11.453 "state": "online", 00:17:11.453 "raid_level": "raid1", 00:17:11.453 "superblock": true, 00:17:11.453 "num_base_bdevs": 2, 00:17:11.453 "num_base_bdevs_discovered": 2, 00:17:11.453 "num_base_bdevs_operational": 2, 00:17:11.453 "base_bdevs_list": [ 00:17:11.453 { 00:17:11.453 "name": "BaseBdev1", 00:17:11.453 "uuid": "bc126f34-7bb6-5640-b1f4-e0fc20d1802a", 00:17:11.453 "is_configured": true, 00:17:11.453 "data_offset": 256, 00:17:11.453 "data_size": 7936 00:17:11.453 }, 00:17:11.453 { 00:17:11.453 "name": "BaseBdev2", 00:17:11.453 "uuid": 
"c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:11.453 "is_configured": true, 00:17:11.453 "data_offset": 256, 00:17:11.453 "data_size": 7936 00:17:11.453 } 00:17:11.453 ] 00:17:11.453 }' 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:11.453 18:57:54 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.023 [2024-11-16 18:57:55.277231] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:12.023 18:57:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.023 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:12.283 [2024-11-16 18:57:55.536739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:12.283 /dev/nbd0 00:17:12.283 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:12.283 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:12.283 18:57:55 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:12.283 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:12.283 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:12.283 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:12.283 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:12.284 1+0 records in 00:17:12.284 1+0 records out 00:17:12.284 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397987 s, 10.3 MB/s 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:12.284 18:57:55 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:12.853 7936+0 records in 00:17:12.853 7936+0 records out 00:17:12.853 32505856 bytes (33 MB, 31 MiB) copied, 0.601426 s, 54.0 MB/s 00:17:12.853 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:12.853 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:12.853 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:12.854 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:12.854 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:12.854 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:12.854 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:13.114 [2024-11-16 18:57:56.428692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.114 [2024-11-16 18:57:56.453620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:13.114 18:57:56 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.114 "name": "raid_bdev1", 00:17:13.114 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:13.114 "strip_size_kb": 0, 00:17:13.114 "state": "online", 00:17:13.114 "raid_level": "raid1", 00:17:13.114 "superblock": true, 00:17:13.114 "num_base_bdevs": 2, 00:17:13.114 "num_base_bdevs_discovered": 1, 00:17:13.114 "num_base_bdevs_operational": 1, 00:17:13.114 "base_bdevs_list": [ 00:17:13.114 { 00:17:13.114 "name": null, 00:17:13.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.114 "is_configured": false, 00:17:13.114 "data_offset": 0, 00:17:13.114 "data_size": 7936 00:17:13.114 }, 00:17:13.114 { 00:17:13.114 "name": "BaseBdev2", 00:17:13.114 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:13.114 "is_configured": true, 00:17:13.114 "data_offset": 256, 00:17:13.114 "data_size": 7936 00:17:13.114 } 
00:17:13.114 ] 00:17:13.114 }' 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.114 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.684 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:13.684 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.684 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.684 [2024-11-16 18:57:56.956764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:13.684 [2024-11-16 18:57:56.970078] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:13.684 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.684 18:57:56 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:13.684 [2024-11-16 18:57:56.971874] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:14.624 18:57:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:14.624 18:57:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.624 18:57:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:14.624 18:57:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:14.624 18:57:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.624 18:57:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.624 18:57:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.624 18:57:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.624 18:57:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.624 18:57:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.624 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:14.624 "name": "raid_bdev1", 00:17:14.624 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:14.624 "strip_size_kb": 0, 00:17:14.624 "state": "online", 00:17:14.624 "raid_level": "raid1", 00:17:14.624 "superblock": true, 00:17:14.624 "num_base_bdevs": 2, 00:17:14.624 "num_base_bdevs_discovered": 2, 00:17:14.624 "num_base_bdevs_operational": 2, 00:17:14.624 "process": { 00:17:14.624 "type": "rebuild", 00:17:14.624 "target": "spare", 00:17:14.624 "progress": { 00:17:14.624 "blocks": 2560, 00:17:14.624 "percent": 32 00:17:14.624 } 00:17:14.624 }, 00:17:14.624 "base_bdevs_list": [ 00:17:14.624 { 00:17:14.624 "name": "spare", 00:17:14.624 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:14.624 "is_configured": true, 00:17:14.624 "data_offset": 256, 00:17:14.624 "data_size": 7936 00:17:14.624 }, 00:17:14.624 { 00:17:14.624 "name": "BaseBdev2", 00:17:14.624 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:14.624 "is_configured": true, 00:17:14.624 "data_offset": 256, 00:17:14.624 "data_size": 7936 00:17:14.624 } 00:17:14.624 ] 00:17:14.624 }' 00:17:14.624 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:14.624 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:14.624 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.885 [2024-11-16 18:57:58.132513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.885 [2024-11-16 18:57:58.176423] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:14.885 [2024-11-16 18:57:58.176499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.885 [2024-11-16 18:57:58.176513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:14.885 [2024-11-16 18:57:58.176522] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.885 "name": "raid_bdev1", 00:17:14.885 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:14.885 "strip_size_kb": 0, 00:17:14.885 "state": "online", 00:17:14.885 "raid_level": "raid1", 00:17:14.885 "superblock": true, 00:17:14.885 "num_base_bdevs": 2, 00:17:14.885 "num_base_bdevs_discovered": 1, 00:17:14.885 "num_base_bdevs_operational": 1, 00:17:14.885 "base_bdevs_list": [ 00:17:14.885 { 00:17:14.885 "name": null, 00:17:14.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.885 "is_configured": false, 00:17:14.885 "data_offset": 0, 00:17:14.885 "data_size": 7936 00:17:14.885 }, 00:17:14.885 { 00:17:14.885 "name": "BaseBdev2", 00:17:14.885 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:14.885 "is_configured": true, 00:17:14.885 "data_offset": 
256, 00:17:14.885 "data_size": 7936 00:17:14.885 } 00:17:14.885 ] 00:17:14.885 }' 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.885 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.455 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:15.455 "name": "raid_bdev1", 00:17:15.455 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:15.455 "strip_size_kb": 0, 00:17:15.455 "state": "online", 00:17:15.455 "raid_level": "raid1", 00:17:15.455 "superblock": true, 00:17:15.455 "num_base_bdevs": 2, 00:17:15.455 "num_base_bdevs_discovered": 1, 00:17:15.455 "num_base_bdevs_operational": 1, 
00:17:15.455 "base_bdevs_list": [ 00:17:15.455 { 00:17:15.455 "name": null, 00:17:15.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.455 "is_configured": false, 00:17:15.455 "data_offset": 0, 00:17:15.455 "data_size": 7936 00:17:15.455 }, 00:17:15.455 { 00:17:15.455 "name": "BaseBdev2", 00:17:15.455 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:15.455 "is_configured": true, 00:17:15.455 "data_offset": 256, 00:17:15.455 "data_size": 7936 00:17:15.455 } 00:17:15.455 ] 00:17:15.455 }' 00:17:15.456 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:15.456 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:15.456 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:15.456 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:15.456 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:15.456 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.456 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:15.456 [2024-11-16 18:57:58.762924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:15.456 [2024-11-16 18:57:58.775858] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:15.456 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.456 18:57:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:15.456 [2024-11-16 18:57:58.777605] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:16.395 18:57:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.395 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.395 "name": "raid_bdev1", 00:17:16.395 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:16.395 "strip_size_kb": 0, 00:17:16.395 "state": "online", 00:17:16.395 "raid_level": "raid1", 00:17:16.395 "superblock": true, 00:17:16.395 "num_base_bdevs": 2, 00:17:16.395 "num_base_bdevs_discovered": 2, 00:17:16.395 "num_base_bdevs_operational": 2, 00:17:16.395 "process": { 00:17:16.395 "type": "rebuild", 00:17:16.395 "target": "spare", 00:17:16.395 "progress": { 00:17:16.396 "blocks": 2560, 00:17:16.396 "percent": 32 00:17:16.396 } 00:17:16.396 }, 00:17:16.396 "base_bdevs_list": [ 00:17:16.396 { 00:17:16.396 "name": "spare", 00:17:16.396 "uuid": 
"b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:16.396 "is_configured": true, 00:17:16.396 "data_offset": 256, 00:17:16.396 "data_size": 7936 00:17:16.396 }, 00:17:16.396 { 00:17:16.396 "name": "BaseBdev2", 00:17:16.396 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:16.396 "is_configured": true, 00:17:16.396 "data_offset": 256, 00:17:16.396 "data_size": 7936 00:17:16.396 } 00:17:16.396 ] 00:17:16.396 }' 00:17:16.396 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:16.656 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=681 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:16.656 
18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:16.656 "name": "raid_bdev1", 00:17:16.656 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:16.656 "strip_size_kb": 0, 00:17:16.656 "state": "online", 00:17:16.656 "raid_level": "raid1", 00:17:16.656 "superblock": true, 00:17:16.656 "num_base_bdevs": 2, 00:17:16.656 "num_base_bdevs_discovered": 2, 00:17:16.656 "num_base_bdevs_operational": 2, 00:17:16.656 "process": { 00:17:16.656 "type": "rebuild", 00:17:16.656 "target": "spare", 00:17:16.656 "progress": { 00:17:16.656 "blocks": 2816, 00:17:16.656 "percent": 35 00:17:16.656 } 00:17:16.656 }, 00:17:16.656 "base_bdevs_list": [ 00:17:16.656 { 00:17:16.656 "name": "spare", 00:17:16.656 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:16.656 "is_configured": true, 00:17:16.656 "data_offset": 256, 00:17:16.656 "data_size": 7936 00:17:16.656 
}, 00:17:16.656 { 00:17:16.656 "name": "BaseBdev2", 00:17:16.656 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:16.656 "is_configured": true, 00:17:16.656 "data_offset": 256, 00:17:16.656 "data_size": 7936 00:17:16.656 } 00:17:16.656 ] 00:17:16.656 }' 00:17:16.656 18:57:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:16.656 18:58:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:16.656 18:58:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:16.656 18:58:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:16.656 18:58:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.036 "name": "raid_bdev1", 00:17:18.036 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:18.036 "strip_size_kb": 0, 00:17:18.036 "state": "online", 00:17:18.036 "raid_level": "raid1", 00:17:18.036 "superblock": true, 00:17:18.036 "num_base_bdevs": 2, 00:17:18.036 "num_base_bdevs_discovered": 2, 00:17:18.036 "num_base_bdevs_operational": 2, 00:17:18.036 "process": { 00:17:18.036 "type": "rebuild", 00:17:18.036 "target": "spare", 00:17:18.036 "progress": { 00:17:18.036 "blocks": 5888, 00:17:18.036 "percent": 74 00:17:18.036 } 00:17:18.036 }, 00:17:18.036 "base_bdevs_list": [ 00:17:18.036 { 00:17:18.036 "name": "spare", 00:17:18.036 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:18.036 "is_configured": true, 00:17:18.036 "data_offset": 256, 00:17:18.036 "data_size": 7936 00:17:18.036 }, 00:17:18.036 { 00:17:18.036 "name": "BaseBdev2", 00:17:18.036 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:18.036 "is_configured": true, 00:17:18.036 "data_offset": 256, 00:17:18.036 "data_size": 7936 00:17:18.036 } 00:17:18.036 ] 00:17:18.036 }' 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:18.036 18:58:01 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:18.604 [2024-11-16 18:58:01.888765] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:18.605 [2024-11-16 18:58:01.888829] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:18.605 [2024-11-16 18:58:01.888918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:18.864 "name": "raid_bdev1", 00:17:18.864 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:18.864 
"strip_size_kb": 0, 00:17:18.864 "state": "online", 00:17:18.864 "raid_level": "raid1", 00:17:18.864 "superblock": true, 00:17:18.864 "num_base_bdevs": 2, 00:17:18.864 "num_base_bdevs_discovered": 2, 00:17:18.864 "num_base_bdevs_operational": 2, 00:17:18.864 "base_bdevs_list": [ 00:17:18.864 { 00:17:18.864 "name": "spare", 00:17:18.864 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:18.864 "is_configured": true, 00:17:18.864 "data_offset": 256, 00:17:18.864 "data_size": 7936 00:17:18.864 }, 00:17:18.864 { 00:17:18.864 "name": "BaseBdev2", 00:17:18.864 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:18.864 "is_configured": true, 00:17:18.864 "data_offset": 256, 00:17:18.864 "data_size": 7936 00:17:18.864 } 00:17:18.864 ] 00:17:18.864 }' 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:18.864 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:19.123 18:58:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:19.123 "name": "raid_bdev1", 00:17:19.123 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:19.123 "strip_size_kb": 0, 00:17:19.123 "state": "online", 00:17:19.123 "raid_level": "raid1", 00:17:19.123 "superblock": true, 00:17:19.123 "num_base_bdevs": 2, 00:17:19.123 "num_base_bdevs_discovered": 2, 00:17:19.123 "num_base_bdevs_operational": 2, 00:17:19.123 "base_bdevs_list": [ 00:17:19.123 { 00:17:19.123 "name": "spare", 00:17:19.123 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:19.123 "is_configured": true, 00:17:19.123 "data_offset": 256, 00:17:19.123 "data_size": 7936 00:17:19.123 }, 00:17:19.123 { 00:17:19.123 "name": "BaseBdev2", 00:17:19.123 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:19.123 "is_configured": true, 00:17:19.123 "data_offset": 256, 00:17:19.123 "data_size": 7936 00:17:19.123 } 00:17:19.123 ] 00:17:19.123 }' 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:19.123 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:19.123 18:58:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.124 "name": "raid_bdev1", 00:17:19.124 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:19.124 "strip_size_kb": 0, 00:17:19.124 "state": "online", 00:17:19.124 "raid_level": "raid1", 00:17:19.124 "superblock": true, 00:17:19.124 "num_base_bdevs": 2, 00:17:19.124 "num_base_bdevs_discovered": 2, 00:17:19.124 "num_base_bdevs_operational": 2, 00:17:19.124 "base_bdevs_list": [ 00:17:19.124 { 00:17:19.124 "name": "spare", 00:17:19.124 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:19.124 "is_configured": true, 00:17:19.124 "data_offset": 256, 00:17:19.124 "data_size": 7936 00:17:19.124 }, 00:17:19.124 { 00:17:19.124 "name": "BaseBdev2", 00:17:19.124 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:19.124 "is_configured": true, 00:17:19.124 "data_offset": 256, 00:17:19.124 "data_size": 7936 00:17:19.124 } 00:17:19.124 ] 00:17:19.124 }' 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.124 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.692 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:19.692 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.692 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.692 [2024-11-16 18:58:02.957132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:19.693 [2024-11-16 18:58:02.957214] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:19.693 [2024-11-16 18:58:02.957324] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:19.693 [2024-11-16 18:58:02.957400] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all 
in destruct 00:17:19.693 [2024-11-16 18:58:02.957432] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:19.693 18:58:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:19.693 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:19.693 18:58:03 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:17:19.693 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:19.693 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.693 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:19.952 /dev/nbd0 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:19.952 1+0 records in 00:17:19.952 1+0 records out 00:17:19.952 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368682 
s, 11.1 MB/s 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:19.952 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:20.212 /dev/nbd1 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@877 -- # break 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:20.212 1+0 records in 00:17:20.212 1+0 records out 00:17:20.212 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558314 s, 7.3 MB/s 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:20.212 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:20.472 18:58:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:20.732 
18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.732 [2024-11-16 18:58:04.144590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:20.732 [2024-11-16 18:58:04.144723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.732 [2024-11-16 18:58:04.144764] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 
00:17:20.732 [2024-11-16 18:58:04.144794] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.732 [2024-11-16 18:58:04.146738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.732 [2024-11-16 18:58:04.146822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:20.732 [2024-11-16 18:58:04.146931] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:20.732 [2024-11-16 18:58:04.147014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:20.732 [2024-11-16 18:58:04.147190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.732 spare 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.732 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.992 [2024-11-16 18:58:04.247107] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:20.992 [2024-11-16 18:58:04.247177] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.992 [2024-11-16 18:58:04.247279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:20.992 [2024-11-16 18:58:04.247443] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:20.992 [2024-11-16 18:58:04.247478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:20.992 [2024-11-16 18:58:04.247635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.992 18:58:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.992 "name": "raid_bdev1", 00:17:20.992 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:20.992 "strip_size_kb": 0, 00:17:20.992 "state": "online", 00:17:20.992 "raid_level": "raid1", 00:17:20.992 "superblock": true, 00:17:20.992 "num_base_bdevs": 2, 00:17:20.992 "num_base_bdevs_discovered": 2, 00:17:20.992 "num_base_bdevs_operational": 2, 00:17:20.992 "base_bdevs_list": [ 00:17:20.992 { 00:17:20.992 "name": "spare", 00:17:20.992 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:20.992 "is_configured": true, 00:17:20.992 "data_offset": 256, 00:17:20.992 "data_size": 7936 00:17:20.992 }, 00:17:20.992 { 00:17:20.992 "name": "BaseBdev2", 00:17:20.992 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:20.992 "is_configured": true, 00:17:20.992 "data_offset": 256, 00:17:20.992 "data_size": 7936 00:17:20.992 } 00:17:20.992 ] 00:17:20.992 }' 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.992 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.253 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:21.253 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:21.253 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:21.253 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:21.253 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:21.253 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.253 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.253 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.253 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:21.513 "name": "raid_bdev1", 00:17:21.513 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:21.513 "strip_size_kb": 0, 00:17:21.513 "state": "online", 00:17:21.513 "raid_level": "raid1", 00:17:21.513 "superblock": true, 00:17:21.513 "num_base_bdevs": 2, 00:17:21.513 "num_base_bdevs_discovered": 2, 00:17:21.513 "num_base_bdevs_operational": 2, 00:17:21.513 "base_bdevs_list": [ 00:17:21.513 { 00:17:21.513 "name": "spare", 00:17:21.513 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:21.513 "is_configured": true, 00:17:21.513 "data_offset": 256, 00:17:21.513 "data_size": 7936 00:17:21.513 }, 00:17:21.513 { 00:17:21.513 "name": "BaseBdev2", 00:17:21.513 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:21.513 "is_configured": true, 00:17:21.513 "data_offset": 256, 00:17:21.513 "data_size": 7936 00:17:21.513 } 00:17:21.513 ] 00:17:21.513 }' 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.513 [2024-11-16 18:58:04.875341] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.513 18:58:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.513 "name": "raid_bdev1", 00:17:21.513 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:21.513 "strip_size_kb": 0, 00:17:21.513 "state": "online", 00:17:21.513 "raid_level": "raid1", 00:17:21.513 "superblock": true, 00:17:21.513 "num_base_bdevs": 2, 00:17:21.513 "num_base_bdevs_discovered": 1, 00:17:21.513 "num_base_bdevs_operational": 1, 00:17:21.513 "base_bdevs_list": [ 00:17:21.513 { 00:17:21.513 "name": null, 00:17:21.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.513 "is_configured": false, 00:17:21.513 "data_offset": 0, 00:17:21.513 "data_size": 7936 00:17:21.513 }, 00:17:21.513 { 00:17:21.513 "name": "BaseBdev2", 00:17:21.513 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:21.513 "is_configured": true, 00:17:21.513 "data_offset": 256, 00:17:21.513 "data_size": 7936 00:17:21.513 } 
00:17:21.513 ] 00:17:21.513 }' 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.513 18:58:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.084 18:58:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:22.084 18:58:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.084 18:58:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:22.084 [2024-11-16 18:58:05.310621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.084 [2024-11-16 18:58:05.310809] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:22.084 [2024-11-16 18:58:05.310892] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:22.084 [2024-11-16 18:58:05.310958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:22.084 [2024-11-16 18:58:05.324266] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:22.084 18:58:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.084 18:58:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:22.084 [2024-11-16 18:58:05.326050] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:23.024 "name": "raid_bdev1", 00:17:23.024 
"uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:23.024 "strip_size_kb": 0, 00:17:23.024 "state": "online", 00:17:23.024 "raid_level": "raid1", 00:17:23.024 "superblock": true, 00:17:23.024 "num_base_bdevs": 2, 00:17:23.024 "num_base_bdevs_discovered": 2, 00:17:23.024 "num_base_bdevs_operational": 2, 00:17:23.024 "process": { 00:17:23.024 "type": "rebuild", 00:17:23.024 "target": "spare", 00:17:23.024 "progress": { 00:17:23.024 "blocks": 2560, 00:17:23.024 "percent": 32 00:17:23.024 } 00:17:23.024 }, 00:17:23.024 "base_bdevs_list": [ 00:17:23.024 { 00:17:23.024 "name": "spare", 00:17:23.024 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:23.024 "is_configured": true, 00:17:23.024 "data_offset": 256, 00:17:23.024 "data_size": 7936 00:17:23.024 }, 00:17:23.024 { 00:17:23.024 "name": "BaseBdev2", 00:17:23.024 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:23.024 "is_configured": true, 00:17:23.024 "data_offset": 256, 00:17:23.024 "data_size": 7936 00:17:23.024 } 00:17:23.024 ] 00:17:23.024 }' 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:23.024 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:23.025 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:23.025 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:23.025 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:23.025 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.025 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.025 [2024-11-16 18:58:06.486158] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:23.285 
[2024-11-16 18:58:06.530552] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:23.285 [2024-11-16 18:58:06.530688] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.285 [2024-11-16 18:58:06.530704] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:23.285 [2024-11-16 18:58:06.530723] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.285 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.286 18:58:06 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.286 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.286 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.286 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.286 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.286 "name": "raid_bdev1", 00:17:23.286 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:23.286 "strip_size_kb": 0, 00:17:23.286 "state": "online", 00:17:23.286 "raid_level": "raid1", 00:17:23.286 "superblock": true, 00:17:23.286 "num_base_bdevs": 2, 00:17:23.286 "num_base_bdevs_discovered": 1, 00:17:23.286 "num_base_bdevs_operational": 1, 00:17:23.286 "base_bdevs_list": [ 00:17:23.286 { 00:17:23.286 "name": null, 00:17:23.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.286 "is_configured": false, 00:17:23.286 "data_offset": 0, 00:17:23.286 "data_size": 7936 00:17:23.286 }, 00:17:23.286 { 00:17:23.286 "name": "BaseBdev2", 00:17:23.286 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:23.286 "is_configured": true, 00:17:23.286 "data_offset": 256, 00:17:23.286 "data_size": 7936 00:17:23.286 } 00:17:23.286 ] 00:17:23.286 }' 00:17:23.286 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.286 18:58:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:23.546 18:58:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:23.546 18:58:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.546 18:58:07 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.546 [2024-11-16 18:58:07.012773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:23.546 [2024-11-16 18:58:07.012876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.546 [2024-11-16 18:58:07.012932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:23.546 [2024-11-16 18:58:07.012974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.546 [2024-11-16 18:58:07.013216] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.546 [2024-11-16 18:58:07.013271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:23.546 [2024-11-16 18:58:07.013346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:23.546 [2024-11-16 18:58:07.013385] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:23.546 [2024-11-16 18:58:07.013439] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:23.546 [2024-11-16 18:58:07.013488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.806 [2024-11-16 18:58:07.026232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:23.806 spare 00:17:23.806 18:58:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.806 [2024-11-16 18:58:07.027994] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:23.806 18:58:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.746 "name": 
"raid_bdev1", 00:17:24.746 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:24.746 "strip_size_kb": 0, 00:17:24.746 "state": "online", 00:17:24.746 "raid_level": "raid1", 00:17:24.746 "superblock": true, 00:17:24.746 "num_base_bdevs": 2, 00:17:24.746 "num_base_bdevs_discovered": 2, 00:17:24.746 "num_base_bdevs_operational": 2, 00:17:24.746 "process": { 00:17:24.746 "type": "rebuild", 00:17:24.746 "target": "spare", 00:17:24.746 "progress": { 00:17:24.746 "blocks": 2560, 00:17:24.746 "percent": 32 00:17:24.746 } 00:17:24.746 }, 00:17:24.746 "base_bdevs_list": [ 00:17:24.746 { 00:17:24.746 "name": "spare", 00:17:24.746 "uuid": "b7f2d85b-64e2-5cb5-b097-8b258ca0ec00", 00:17:24.746 "is_configured": true, 00:17:24.746 "data_offset": 256, 00:17:24.746 "data_size": 7936 00:17:24.746 }, 00:17:24.746 { 00:17:24.746 "name": "BaseBdev2", 00:17:24.746 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:24.746 "is_configured": true, 00:17:24.746 "data_offset": 256, 00:17:24.746 "data_size": 7936 00:17:24.746 } 00:17:24.746 ] 00:17:24.746 }' 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.746 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.746 [2024-11-16 18:58:08.192611] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:25.006 [2024-11-16 18:58:08.232465] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:25.006 [2024-11-16 18:58:08.232524] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.006 [2024-11-16 18:58:08.232540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:25.006 [2024-11-16 18:58:08.232547] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.006 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.006 "name": "raid_bdev1", 00:17:25.006 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:25.006 "strip_size_kb": 0, 00:17:25.006 "state": "online", 00:17:25.006 "raid_level": "raid1", 00:17:25.006 "superblock": true, 00:17:25.006 "num_base_bdevs": 2, 00:17:25.006 "num_base_bdevs_discovered": 1, 00:17:25.006 "num_base_bdevs_operational": 1, 00:17:25.006 "base_bdevs_list": [ 00:17:25.006 { 00:17:25.006 "name": null, 00:17:25.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.006 "is_configured": false, 00:17:25.006 "data_offset": 0, 00:17:25.006 "data_size": 7936 00:17:25.006 }, 00:17:25.006 { 00:17:25.006 "name": "BaseBdev2", 00:17:25.006 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:25.006 "is_configured": true, 00:17:25.006 "data_offset": 256, 00:17:25.006 "data_size": 7936 00:17:25.006 } 00:17:25.006 ] 00:17:25.006 }' 00:17:25.007 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.007 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.267 18:58:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.267 "name": "raid_bdev1", 00:17:25.267 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:25.267 "strip_size_kb": 0, 00:17:25.267 "state": "online", 00:17:25.267 "raid_level": "raid1", 00:17:25.267 "superblock": true, 00:17:25.267 "num_base_bdevs": 2, 00:17:25.267 "num_base_bdevs_discovered": 1, 00:17:25.267 "num_base_bdevs_operational": 1, 00:17:25.267 "base_bdevs_list": [ 00:17:25.267 { 00:17:25.267 "name": null, 00:17:25.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.267 "is_configured": false, 00:17:25.267 "data_offset": 0, 00:17:25.267 "data_size": 7936 00:17:25.267 }, 00:17:25.267 { 00:17:25.267 "name": "BaseBdev2", 00:17:25.267 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:25.267 "is_configured": true, 00:17:25.267 "data_offset": 256, 00:17:25.267 "data_size": 7936 00:17:25.267 } 00:17:25.267 ] 00:17:25.267 }' 00:17:25.267 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.527 [2024-11-16 18:58:08.818491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:25.527 [2024-11-16 18:58:08.818601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.527 [2024-11-16 18:58:08.818641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:25.527 [2024-11-16 18:58:08.818689] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.527 [2024-11-16 18:58:08.818891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.527 [2024-11-16 18:58:08.818937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:17:25.527 [2024-11-16 18:58:08.819011] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:25.527 [2024-11-16 18:58:08.819049] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:25.527 [2024-11-16 18:58:08.819087] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:25.527 [2024-11-16 18:58:08.819130] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:25.527 BaseBdev1 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.527 18:58:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.471 "name": "raid_bdev1", 00:17:26.471 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:26.471 "strip_size_kb": 0, 00:17:26.471 "state": "online", 00:17:26.471 "raid_level": "raid1", 00:17:26.471 "superblock": true, 00:17:26.471 "num_base_bdevs": 2, 00:17:26.471 "num_base_bdevs_discovered": 1, 00:17:26.471 "num_base_bdevs_operational": 1, 00:17:26.471 "base_bdevs_list": [ 00:17:26.471 { 00:17:26.471 "name": null, 00:17:26.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.471 "is_configured": false, 00:17:26.471 "data_offset": 0, 00:17:26.471 "data_size": 7936 00:17:26.471 }, 00:17:26.471 { 00:17:26.471 "name": "BaseBdev2", 00:17:26.471 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:26.471 "is_configured": true, 00:17:26.471 "data_offset": 256, 00:17:26.471 "data_size": 7936 00:17:26.471 } 00:17:26.471 ] 00:17:26.471 }' 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.471 18:58:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.041 "name": "raid_bdev1", 00:17:27.041 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:27.041 "strip_size_kb": 0, 00:17:27.041 "state": "online", 00:17:27.041 "raid_level": "raid1", 00:17:27.041 "superblock": true, 00:17:27.041 "num_base_bdevs": 2, 00:17:27.041 "num_base_bdevs_discovered": 1, 00:17:27.041 "num_base_bdevs_operational": 1, 00:17:27.041 "base_bdevs_list": [ 00:17:27.041 { 00:17:27.041 "name": null, 00:17:27.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.041 "is_configured": false, 00:17:27.041 "data_offset": 0, 00:17:27.041 "data_size": 7936 00:17:27.041 }, 00:17:27.041 { 00:17:27.041 "name": "BaseBdev2", 00:17:27.041 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:27.041 "is_configured": 
true, 00:17:27.041 "data_offset": 256, 00:17:27.041 "data_size": 7936 00:17:27.041 } 00:17:27.041 ] 00:17:27.041 }' 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.041 [2024-11-16 18:58:10.439719] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.041 [2024-11-16 18:58:10.439879] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:27.041 [2024-11-16 18:58:10.439955] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:27.041 request: 00:17:27.041 { 00:17:27.041 "base_bdev": "BaseBdev1", 00:17:27.041 "raid_bdev": "raid_bdev1", 00:17:27.041 "method": "bdev_raid_add_base_bdev", 00:17:27.041 "req_id": 1 00:17:27.041 } 00:17:27.041 Got JSON-RPC error response 00:17:27.041 response: 00:17:27.041 { 00:17:27.041 "code": -22, 00:17:27.041 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:27.041 } 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:27.041 18:58:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.424 "name": "raid_bdev1", 00:17:28.424 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:28.424 "strip_size_kb": 0, 00:17:28.424 "state": "online", 00:17:28.424 "raid_level": "raid1", 00:17:28.424 "superblock": true, 00:17:28.424 "num_base_bdevs": 2, 00:17:28.424 "num_base_bdevs_discovered": 1, 00:17:28.424 "num_base_bdevs_operational": 1, 00:17:28.424 "base_bdevs_list": [ 00:17:28.424 { 00:17:28.424 "name": null, 00:17:28.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.424 "is_configured": false, 00:17:28.424 
"data_offset": 0, 00:17:28.424 "data_size": 7936 00:17:28.424 }, 00:17:28.424 { 00:17:28.424 "name": "BaseBdev2", 00:17:28.424 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:28.424 "is_configured": true, 00:17:28.424 "data_offset": 256, 00:17:28.424 "data_size": 7936 00:17:28.424 } 00:17:28.424 ] 00:17:28.424 }' 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.424 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.685 "name": "raid_bdev1", 00:17:28.685 "uuid": "942e9755-3d87-46f2-a015-6adf9a72d256", 00:17:28.685 
"strip_size_kb": 0, 00:17:28.685 "state": "online", 00:17:28.685 "raid_level": "raid1", 00:17:28.685 "superblock": true, 00:17:28.685 "num_base_bdevs": 2, 00:17:28.685 "num_base_bdevs_discovered": 1, 00:17:28.685 "num_base_bdevs_operational": 1, 00:17:28.685 "base_bdevs_list": [ 00:17:28.685 { 00:17:28.685 "name": null, 00:17:28.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.685 "is_configured": false, 00:17:28.685 "data_offset": 0, 00:17:28.685 "data_size": 7936 00:17:28.685 }, 00:17:28.685 { 00:17:28.685 "name": "BaseBdev2", 00:17:28.685 "uuid": "c02f2f80-cd92-50c3-af75-0a6736888fb0", 00:17:28.685 "is_configured": true, 00:17:28.685 "data_offset": 256, 00:17:28.685 "data_size": 7936 00:17:28.685 } 00:17:28.685 ] 00:17:28.685 }' 00:17:28.685 18:58:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87405 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87405 ']' 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87405 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87405 00:17:28.685 killing process with 
pid 87405 00:17:28.685 Received shutdown signal, test time was about 60.000000 seconds 00:17:28.685 00:17:28.685 Latency(us) 00:17:28.685 [2024-11-16T18:58:12.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.685 [2024-11-16T18:58:12.157Z] =================================================================================================================== 00:17:28.685 [2024-11-16T18:58:12.157Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:28.685 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:28.686 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87405' 00:17:28.686 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87405 00:17:28.686 [2024-11-16 18:58:12.082315] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:28.686 [2024-11-16 18:58:12.082405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.686 [2024-11-16 18:58:12.082444] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.686 [2024-11-16 18:58:12.082455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:28.686 18:58:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87405 00:17:28.945 [2024-11-16 18:58:12.378891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:30.352 18:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:17:30.352 00:17:30.352 real 0m19.636s 00:17:30.352 user 0m25.782s 00:17:30.352 sys 0m2.563s 00:17:30.352 18:58:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:30.352 18:58:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.352 ************************************ 00:17:30.352 END TEST raid_rebuild_test_sb_md_separate 00:17:30.352 ************************************ 00:17:30.352 18:58:13 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:17:30.352 18:58:13 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:30.352 18:58:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:30.352 18:58:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:30.352 18:58:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:30.352 ************************************ 00:17:30.352 START TEST raid_state_function_test_sb_md_interleaved 00:17:30.352 ************************************ 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:30.352 18:58:13 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:30.352 Process raid pid: 88097 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88097 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88097' 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88097 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88097 ']' 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.352 18:58:13 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.352 [2024-11-16 18:58:13.562133] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:30.352 [2024-11-16 18:58:13.562352] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.352 [2024-11-16 18:58:13.736917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.651 [2024-11-16 18:58:13.844864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.651 [2024-11-16 18:58:14.041113] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.651 [2024-11-16 18:58:14.041146] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:30.929 [2024-11-16 18:58:14.376962] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:30.929 [2024-11-16 18:58:14.377069] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:30.929 [2024-11-16 18:58:14.377113] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:30.929 [2024-11-16 18:58:14.377136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:30.929 18:58:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:30.929 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.189 18:58:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.189 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.189 "name": "Existed_Raid", 00:17:31.189 "uuid": "60dd341a-9112-433c-9711-63d36e468b61", 00:17:31.189 "strip_size_kb": 0, 00:17:31.189 "state": "configuring", 00:17:31.189 "raid_level": "raid1", 00:17:31.189 "superblock": true, 00:17:31.189 "num_base_bdevs": 2, 00:17:31.189 "num_base_bdevs_discovered": 0, 00:17:31.189 "num_base_bdevs_operational": 2, 00:17:31.189 "base_bdevs_list": [ 00:17:31.189 { 00:17:31.189 "name": "BaseBdev1", 00:17:31.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.189 "is_configured": false, 00:17:31.189 "data_offset": 0, 00:17:31.189 "data_size": 0 00:17:31.189 }, 00:17:31.189 { 00:17:31.189 "name": "BaseBdev2", 00:17:31.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.189 "is_configured": false, 00:17:31.189 "data_offset": 0, 00:17:31.189 "data_size": 0 00:17:31.189 } 00:17:31.189 ] 00:17:31.189 }' 00:17:31.189 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.189 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.449 [2024-11-16 18:58:14.784185] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:31.449 [2024-11-16 18:58:14.784264] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.449 [2024-11-16 18:58:14.796176] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:31.449 [2024-11-16 18:58:14.796257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:31.449 [2024-11-16 18:58:14.796297] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:31.449 [2024-11-16 18:58:14.796321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.449 [2024-11-16 18:58:14.841354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:31.449 BaseBdev1 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:31.449 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.450 [ 00:17:31.450 { 00:17:31.450 "name": "BaseBdev1", 00:17:31.450 "aliases": [ 00:17:31.450 "2bb10589-018b-4697-9d2b-f4879f3a88b7" 00:17:31.450 ], 00:17:31.450 "product_name": "Malloc disk", 00:17:31.450 "block_size": 4128, 00:17:31.450 "num_blocks": 8192, 00:17:31.450 "uuid": "2bb10589-018b-4697-9d2b-f4879f3a88b7", 00:17:31.450 "md_size": 32, 00:17:31.450 
"md_interleave": true, 00:17:31.450 "dif_type": 0, 00:17:31.450 "assigned_rate_limits": { 00:17:31.450 "rw_ios_per_sec": 0, 00:17:31.450 "rw_mbytes_per_sec": 0, 00:17:31.450 "r_mbytes_per_sec": 0, 00:17:31.450 "w_mbytes_per_sec": 0 00:17:31.450 }, 00:17:31.450 "claimed": true, 00:17:31.450 "claim_type": "exclusive_write", 00:17:31.450 "zoned": false, 00:17:31.450 "supported_io_types": { 00:17:31.450 "read": true, 00:17:31.450 "write": true, 00:17:31.450 "unmap": true, 00:17:31.450 "flush": true, 00:17:31.450 "reset": true, 00:17:31.450 "nvme_admin": false, 00:17:31.450 "nvme_io": false, 00:17:31.450 "nvme_io_md": false, 00:17:31.450 "write_zeroes": true, 00:17:31.450 "zcopy": true, 00:17:31.450 "get_zone_info": false, 00:17:31.450 "zone_management": false, 00:17:31.450 "zone_append": false, 00:17:31.450 "compare": false, 00:17:31.450 "compare_and_write": false, 00:17:31.450 "abort": true, 00:17:31.450 "seek_hole": false, 00:17:31.450 "seek_data": false, 00:17:31.450 "copy": true, 00:17:31.450 "nvme_iov_md": false 00:17:31.450 }, 00:17:31.450 "memory_domains": [ 00:17:31.450 { 00:17:31.450 "dma_device_id": "system", 00:17:31.450 "dma_device_type": 1 00:17:31.450 }, 00:17:31.450 { 00:17:31.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.450 "dma_device_type": 2 00:17:31.450 } 00:17:31.450 ], 00:17:31.450 "driver_specific": {} 00:17:31.450 } 00:17:31.450 ] 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:31.450 18:58:14 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.450 "name": "Existed_Raid", 00:17:31.450 "uuid": "09d25e39-a02d-41f9-821b-47fd5e279fc5", 00:17:31.450 "strip_size_kb": 0, 00:17:31.450 "state": "configuring", 00:17:31.450 "raid_level": "raid1", 
00:17:31.450 "superblock": true, 00:17:31.450 "num_base_bdevs": 2, 00:17:31.450 "num_base_bdevs_discovered": 1, 00:17:31.450 "num_base_bdevs_operational": 2, 00:17:31.450 "base_bdevs_list": [ 00:17:31.450 { 00:17:31.450 "name": "BaseBdev1", 00:17:31.450 "uuid": "2bb10589-018b-4697-9d2b-f4879f3a88b7", 00:17:31.450 "is_configured": true, 00:17:31.450 "data_offset": 256, 00:17:31.450 "data_size": 7936 00:17:31.450 }, 00:17:31.450 { 00:17:31.450 "name": "BaseBdev2", 00:17:31.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.450 "is_configured": false, 00:17:31.450 "data_offset": 0, 00:17:31.450 "data_size": 0 00:17:31.450 } 00:17:31.450 ] 00:17:31.450 }' 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.450 18:58:14 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.021 [2024-11-16 18:58:15.288640] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:32.021 [2024-11-16 18:58:15.288738] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.021 [2024-11-16 18:58:15.296708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:32.021 [2024-11-16 18:58:15.298448] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.021 [2024-11-16 18:58:15.298536] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.021 
18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.021 "name": "Existed_Raid", 00:17:32.021 "uuid": "2e545014-777c-4189-8633-d4bb5974479a", 00:17:32.021 "strip_size_kb": 0, 00:17:32.021 "state": "configuring", 00:17:32.021 "raid_level": "raid1", 00:17:32.021 "superblock": true, 00:17:32.021 "num_base_bdevs": 2, 00:17:32.021 "num_base_bdevs_discovered": 1, 00:17:32.021 "num_base_bdevs_operational": 2, 00:17:32.021 "base_bdevs_list": [ 00:17:32.021 { 00:17:32.021 "name": "BaseBdev1", 00:17:32.021 "uuid": "2bb10589-018b-4697-9d2b-f4879f3a88b7", 00:17:32.021 "is_configured": true, 00:17:32.021 "data_offset": 256, 00:17:32.021 "data_size": 7936 00:17:32.021 }, 00:17:32.021 { 00:17:32.021 "name": "BaseBdev2", 00:17:32.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.021 "is_configured": false, 00:17:32.021 "data_offset": 0, 00:17:32.021 "data_size": 0 00:17:32.021 } 00:17:32.021 ] 00:17:32.021 }' 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:17:32.021 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.282 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:32.283 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.283 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.283 [2024-11-16 18:58:15.752137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:32.283 [2024-11-16 18:58:15.752318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:32.283 [2024-11-16 18:58:15.752331] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:32.283 [2024-11-16 18:58:15.752414] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:32.283 [2024-11-16 18:58:15.752597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:32.283 [2024-11-16 18:58:15.752608] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:32.283 [2024-11-16 18:58:15.752684] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.283 BaseBdev2 00:17:32.283 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.544 [ 00:17:32.544 { 00:17:32.544 "name": "BaseBdev2", 00:17:32.544 "aliases": [ 00:17:32.544 "34490020-ebdf-4274-820f-4215ec89c320" 00:17:32.544 ], 00:17:32.544 "product_name": "Malloc disk", 00:17:32.544 "block_size": 4128, 00:17:32.544 "num_blocks": 8192, 00:17:32.544 "uuid": "34490020-ebdf-4274-820f-4215ec89c320", 00:17:32.544 "md_size": 32, 00:17:32.544 "md_interleave": true, 00:17:32.544 "dif_type": 0, 00:17:32.544 "assigned_rate_limits": { 00:17:32.544 "rw_ios_per_sec": 0, 00:17:32.544 "rw_mbytes_per_sec": 0, 00:17:32.544 "r_mbytes_per_sec": 0, 00:17:32.544 "w_mbytes_per_sec": 0 00:17:32.544 }, 00:17:32.544 "claimed": true, 00:17:32.544 "claim_type": "exclusive_write", 
00:17:32.544 "zoned": false, 00:17:32.544 "supported_io_types": { 00:17:32.544 "read": true, 00:17:32.544 "write": true, 00:17:32.544 "unmap": true, 00:17:32.544 "flush": true, 00:17:32.544 "reset": true, 00:17:32.544 "nvme_admin": false, 00:17:32.544 "nvme_io": false, 00:17:32.544 "nvme_io_md": false, 00:17:32.544 "write_zeroes": true, 00:17:32.544 "zcopy": true, 00:17:32.544 "get_zone_info": false, 00:17:32.544 "zone_management": false, 00:17:32.544 "zone_append": false, 00:17:32.544 "compare": false, 00:17:32.544 "compare_and_write": false, 00:17:32.544 "abort": true, 00:17:32.544 "seek_hole": false, 00:17:32.544 "seek_data": false, 00:17:32.544 "copy": true, 00:17:32.544 "nvme_iov_md": false 00:17:32.544 }, 00:17:32.544 "memory_domains": [ 00:17:32.544 { 00:17:32.544 "dma_device_id": "system", 00:17:32.544 "dma_device_type": 1 00:17:32.544 }, 00:17:32.544 { 00:17:32.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.544 "dma_device_type": 2 00:17:32.544 } 00:17:32.544 ], 00:17:32.544 "driver_specific": {} 00:17:32.544 } 00:17:32.544 ] 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.544 
18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.544 "name": "Existed_Raid", 00:17:32.544 "uuid": "2e545014-777c-4189-8633-d4bb5974479a", 00:17:32.544 "strip_size_kb": 0, 00:17:32.544 "state": "online", 00:17:32.544 "raid_level": "raid1", 00:17:32.544 "superblock": true, 00:17:32.544 "num_base_bdevs": 2, 00:17:32.544 "num_base_bdevs_discovered": 2, 00:17:32.544 
"num_base_bdevs_operational": 2, 00:17:32.544 "base_bdevs_list": [ 00:17:32.544 { 00:17:32.544 "name": "BaseBdev1", 00:17:32.544 "uuid": "2bb10589-018b-4697-9d2b-f4879f3a88b7", 00:17:32.544 "is_configured": true, 00:17:32.544 "data_offset": 256, 00:17:32.544 "data_size": 7936 00:17:32.544 }, 00:17:32.544 { 00:17:32.544 "name": "BaseBdev2", 00:17:32.544 "uuid": "34490020-ebdf-4274-820f-4215ec89c320", 00:17:32.544 "is_configured": true, 00:17:32.544 "data_offset": 256, 00:17:32.544 "data_size": 7936 00:17:32.544 } 00:17:32.544 ] 00:17:32.544 }' 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.544 18:58:15 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.805 18:58:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:32.805 [2024-11-16 18:58:16.211692] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:32.805 "name": "Existed_Raid", 00:17:32.805 "aliases": [ 00:17:32.805 "2e545014-777c-4189-8633-d4bb5974479a" 00:17:32.805 ], 00:17:32.805 "product_name": "Raid Volume", 00:17:32.805 "block_size": 4128, 00:17:32.805 "num_blocks": 7936, 00:17:32.805 "uuid": "2e545014-777c-4189-8633-d4bb5974479a", 00:17:32.805 "md_size": 32, 00:17:32.805 "md_interleave": true, 00:17:32.805 "dif_type": 0, 00:17:32.805 "assigned_rate_limits": { 00:17:32.805 "rw_ios_per_sec": 0, 00:17:32.805 "rw_mbytes_per_sec": 0, 00:17:32.805 "r_mbytes_per_sec": 0, 00:17:32.805 "w_mbytes_per_sec": 0 00:17:32.805 }, 00:17:32.805 "claimed": false, 00:17:32.805 "zoned": false, 00:17:32.805 "supported_io_types": { 00:17:32.805 "read": true, 00:17:32.805 "write": true, 00:17:32.805 "unmap": false, 00:17:32.805 "flush": false, 00:17:32.805 "reset": true, 00:17:32.805 "nvme_admin": false, 00:17:32.805 "nvme_io": false, 00:17:32.805 "nvme_io_md": false, 00:17:32.805 "write_zeroes": true, 00:17:32.805 "zcopy": false, 00:17:32.805 "get_zone_info": false, 00:17:32.805 "zone_management": false, 00:17:32.805 "zone_append": false, 00:17:32.805 "compare": false, 00:17:32.805 "compare_and_write": false, 00:17:32.805 "abort": false, 00:17:32.805 "seek_hole": false, 00:17:32.805 "seek_data": false, 00:17:32.805 "copy": false, 00:17:32.805 "nvme_iov_md": false 00:17:32.805 }, 00:17:32.805 "memory_domains": [ 00:17:32.805 { 00:17:32.805 "dma_device_id": "system", 00:17:32.805 "dma_device_type": 1 00:17:32.805 }, 00:17:32.805 { 00:17:32.805 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:32.805 "dma_device_type": 2 00:17:32.805 }, 00:17:32.805 { 00:17:32.805 "dma_device_id": "system", 00:17:32.805 "dma_device_type": 1 00:17:32.805 }, 00:17:32.805 { 00:17:32.805 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.805 "dma_device_type": 2 00:17:32.805 } 00:17:32.805 ], 00:17:32.805 "driver_specific": { 00:17:32.805 "raid": { 00:17:32.805 "uuid": "2e545014-777c-4189-8633-d4bb5974479a", 00:17:32.805 "strip_size_kb": 0, 00:17:32.805 "state": "online", 00:17:32.805 "raid_level": "raid1", 00:17:32.805 "superblock": true, 00:17:32.805 "num_base_bdevs": 2, 00:17:32.805 "num_base_bdevs_discovered": 2, 00:17:32.805 "num_base_bdevs_operational": 2, 00:17:32.805 "base_bdevs_list": [ 00:17:32.805 { 00:17:32.805 "name": "BaseBdev1", 00:17:32.805 "uuid": "2bb10589-018b-4697-9d2b-f4879f3a88b7", 00:17:32.805 "is_configured": true, 00:17:32.805 "data_offset": 256, 00:17:32.805 "data_size": 7936 00:17:32.805 }, 00:17:32.805 { 00:17:32.805 "name": "BaseBdev2", 00:17:32.805 "uuid": "34490020-ebdf-4274-820f-4215ec89c320", 00:17:32.805 "is_configured": true, 00:17:32.805 "data_offset": 256, 00:17:32.805 "data_size": 7936 00:17:32.805 } 00:17:32.805 ] 00:17:32.805 } 00:17:32.805 } 00:17:32.805 }' 00:17:32.805 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:33.066 BaseBdev2' 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:17:33.066 
18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.066 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.066 [2024-11-16 18:58:16.463007] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.326 18:58:16 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.326 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.326 "name": "Existed_Raid", 00:17:33.326 "uuid": "2e545014-777c-4189-8633-d4bb5974479a", 00:17:33.326 "strip_size_kb": 0, 00:17:33.326 "state": "online", 00:17:33.326 "raid_level": "raid1", 00:17:33.326 "superblock": true, 00:17:33.326 "num_base_bdevs": 2, 00:17:33.326 "num_base_bdevs_discovered": 1, 00:17:33.326 "num_base_bdevs_operational": 1, 00:17:33.326 "base_bdevs_list": [ 00:17:33.326 { 00:17:33.326 "name": null, 00:17:33.326 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:33.326 "is_configured": false, 00:17:33.326 "data_offset": 0, 00:17:33.326 "data_size": 7936 00:17:33.326 }, 00:17:33.326 { 00:17:33.326 "name": "BaseBdev2", 00:17:33.326 "uuid": "34490020-ebdf-4274-820f-4215ec89c320", 00:17:33.326 "is_configured": true, 00:17:33.326 "data_offset": 256, 00:17:33.326 "data_size": 7936 00:17:33.326 } 00:17:33.326 ] 00:17:33.326 }' 00:17:33.327 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.327 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.586 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:33.586 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:33.586 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.586 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:33.586 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.586 18:58:16 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.587 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.587 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:33.587 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.587 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:33.587 18:58:17 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.587 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.587 [2024-11-16 18:58:17.041740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.587 [2024-11-16 18:58:17.041840] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.847 [2024-11-16 18:58:17.130216] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.847 [2024-11-16 18:58:17.130259] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.847 [2024-11-16 18:58:17.130271] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88097 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88097 ']' 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88097 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88097 00:17:33.847 killing process with pid 88097 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88097' 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88097 00:17:33.847 [2024-11-16 18:58:17.228245] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:33.847 18:58:17 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88097 00:17:33.847 [2024-11-16 18:58:17.243964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:35.230 
18:58:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:17:35.230 00:17:35.230 real 0m4.800s 00:17:35.230 user 0m6.907s 00:17:35.230 sys 0m0.845s 00:17:35.230 18:58:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:35.230 18:58:18 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.230 ************************************ 00:17:35.230 END TEST raid_state_function_test_sb_md_interleaved 00:17:35.230 ************************************ 00:17:35.230 18:58:18 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:35.230 18:58:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:35.230 18:58:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:35.230 18:58:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:35.230 ************************************ 00:17:35.230 START TEST raid_superblock_test_md_interleaved 00:17:35.230 ************************************ 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88338 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88338 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88338 ']' 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:35.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:35.230 18:58:18 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.230 [2024-11-16 18:58:18.448383] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:35.230 [2024-11-16 18:58:18.448500] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88338 ] 00:17:35.230 [2024-11-16 18:58:18.626276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.490 [2024-11-16 18:58:18.732287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.491 [2024-11-16 18:58:18.935121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.491 [2024-11-16 18:58:18.935179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.060 malloc1 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.060 [2024-11-16 18:58:19.282588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:36.060 [2024-11-16 18:58:19.282661] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.060 [2024-11-16 18:58:19.282682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:36.060 [2024-11-16 18:58:19.282691] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.060 
[2024-11-16 18:58:19.284418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.060 [2024-11-16 18:58:19.284447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:36.060 pt1 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.060 malloc2 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.060 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.060 [2024-11-16 18:58:19.330742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.060 [2024-11-16 18:58:19.330788] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.060 [2024-11-16 18:58:19.330807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:36.060 [2024-11-16 18:58:19.330815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.060 [2024-11-16 18:58:19.332557] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.060 [2024-11-16 18:58:19.332588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.061 pt2 00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.061 [2024-11-16 18:58:19.342756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:36.061 [2024-11-16 18:58:19.344464] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:36.061 [2024-11-16 18:58:19.344632] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:17:36.061 [2024-11-16 18:58:19.344644] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:17:36.061 [2024-11-16 18:58:19.344720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:17:36.061 [2024-11-16 18:58:19.344799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:17:36.061 [2024-11-16 18:58:19.344810] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:17:36.061 [2024-11-16 18:58:19.344873] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:36.061 "name": "raid_bdev1",
00:17:36.061 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f",
00:17:36.061 "strip_size_kb": 0,
00:17:36.061 "state": "online",
00:17:36.061 "raid_level": "raid1",
00:17:36.061 "superblock": true,
00:17:36.061 "num_base_bdevs": 2,
00:17:36.061 "num_base_bdevs_discovered": 2,
00:17:36.061 "num_base_bdevs_operational": 2,
00:17:36.061 "base_bdevs_list": [
00:17:36.061 {
00:17:36.061 "name": "pt1",
00:17:36.061 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:36.061 "is_configured": true,
00:17:36.061 "data_offset": 256,
00:17:36.061 "data_size": 7936
00:17:36.061 },
00:17:36.061 {
00:17:36.061 "name": "pt2",
00:17:36.061 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:36.061 "is_configured": true,
00:17:36.061 "data_offset": 256,
00:17:36.061 "data_size": 7936
00:17:36.061 }
00:17:36.061 ]
00:17:36.061 }'
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:36.061 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.321 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:36.321 [2024-11-16 18:58:19.786166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:36.581 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.581 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:36.581 "name": "raid_bdev1",
00:17:36.581 "aliases": [
00:17:36.581 "126bcca5-4a2b-43f2-afac-9b060260af6f"
00:17:36.581 ],
00:17:36.581 "product_name": "Raid Volume",
00:17:36.581 "block_size": 4128,
00:17:36.581 "num_blocks": 7936,
00:17:36.581 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f",
00:17:36.581 "md_size": 32,
00:17:36.581 "md_interleave": true,
00:17:36.581 "dif_type": 0,
00:17:36.581 "assigned_rate_limits": {
00:17:36.581 "rw_ios_per_sec": 0,
00:17:36.581 "rw_mbytes_per_sec": 0,
00:17:36.581 "r_mbytes_per_sec": 0,
00:17:36.581 "w_mbytes_per_sec": 0
00:17:36.581 },
00:17:36.581 "claimed": false,
00:17:36.581 "zoned": false,
00:17:36.581 "supported_io_types": {
00:17:36.581 "read": true,
00:17:36.581 "write": true,
00:17:36.581 "unmap": false,
00:17:36.581 "flush": false,
00:17:36.582 "reset": true,
00:17:36.582 "nvme_admin": false,
00:17:36.582 "nvme_io": false,
00:17:36.582 "nvme_io_md": false,
00:17:36.582 "write_zeroes": true,
00:17:36.582 "zcopy": false,
00:17:36.582 "get_zone_info": false,
00:17:36.582 "zone_management": false,
00:17:36.582 "zone_append": false,
00:17:36.582 "compare": false,
00:17:36.582 "compare_and_write": false,
00:17:36.582 "abort": false,
00:17:36.582 "seek_hole": false,
00:17:36.582 "seek_data": false,
00:17:36.582 "copy": false,
00:17:36.582 "nvme_iov_md": false
00:17:36.582 },
00:17:36.582 "memory_domains": [
00:17:36.582 {
00:17:36.582 "dma_device_id": "system",
00:17:36.582 "dma_device_type": 1
00:17:36.582 },
00:17:36.582 {
00:17:36.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:36.582 "dma_device_type": 2
00:17:36.582 },
00:17:36.582 {
00:17:36.582 "dma_device_id": "system",
00:17:36.582 "dma_device_type": 1
00:17:36.582 },
00:17:36.582 {
00:17:36.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:36.582 "dma_device_type": 2
00:17:36.582 }
00:17:36.582 ],
00:17:36.582 "driver_specific": {
00:17:36.582 "raid": {
00:17:36.582 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f",
00:17:36.582 "strip_size_kb": 0,
00:17:36.582 "state": "online",
00:17:36.582 "raid_level": "raid1",
00:17:36.582 "superblock": true,
00:17:36.582 "num_base_bdevs": 2,
00:17:36.582 "num_base_bdevs_discovered": 2,
00:17:36.582 "num_base_bdevs_operational": 2,
00:17:36.582 "base_bdevs_list": [
00:17:36.582 {
00:17:36.582 "name": "pt1",
00:17:36.582 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:36.582 "is_configured": true,
00:17:36.582 "data_offset": 256,
00:17:36.582 "data_size": 7936
00:17:36.582 },
00:17:36.582 {
00:17:36.582 "name": "pt2",
00:17:36.582 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:36.582 "is_configured": true,
00:17:36.582 "data_offset": 256,
00:17:36.582 "data_size": 7936
00:17:36.582 }
00:17:36.582 ]
00:17:36.582 }
00:17:36.582 }
00:17:36.582 }'
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:36.582 pt2'
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:36.582 18:58:19 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.582 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:17:36.582 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:17:36.582 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:36.582 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:17:36.582 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.582 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.582 [2024-11-16 18:58:20.017765] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:36.582 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=126bcca5-4a2b-43f2-afac-9b060260af6f
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 126bcca5-4a2b-43f2-afac-9b060260af6f ']'
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.843 [2024-11-16 18:58:20.061437] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:36.843 [2024-11-16 18:58:20.061463] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:36.843 [2024-11-16 18:58:20.061531] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:36.843 [2024-11-16 18:58:20.061589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:36.843 [2024-11-16 18:58:20.061599] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0
00:17:36.843 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.844 [2024-11-16 18:58:20.193251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:36.844 [2024-11-16 18:58:20.195033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:36.844 [2024-11-16 18:58:20.195119] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:17:36.844 [2024-11-16 18:58:20.195160] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:17:36.844 [2024-11-16 18:58:20.195173] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:36.844 [2024-11-16 18:58:20.195182] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:17:36.844 request:
00:17:36.844 {
00:17:36.844 "name": "raid_bdev1",
00:17:36.844 "raid_level": "raid1",
00:17:36.844 "base_bdevs": [
00:17:36.844 "malloc1",
00:17:36.844 "malloc2"
00:17:36.844 ],
00:17:36.844 "superblock": false,
00:17:36.844 "method": "bdev_raid_create",
00:17:36.844 "req_id": 1
00:17:36.844 }
00:17:36.844 Got JSON-RPC error response
00:17:36.844 response:
00:17:36.844 {
00:17:36.844 "code": -17,
00:17:36.844 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:36.844 }
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.844 [2024-11-16 18:58:20.237149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:36.844 [2024-11-16 18:58:20.237192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:36.844 [2024-11-16 18:58:20.237206] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:17:36.844 [2024-11-16 18:58:20.237215] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:36.844 [2024-11-16 18:58:20.238973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:36.844 [2024-11-16 18:58:20.239006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:36.844 [2024-11-16 18:58:20.239043] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:36.844 [2024-11-16 18:58:20.239097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:36.844 pt1
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:36.844 "name": "raid_bdev1",
00:17:36.844 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f",
00:17:36.844 "strip_size_kb": 0,
00:17:36.844 "state": "configuring",
00:17:36.844 "raid_level": "raid1",
00:17:36.844 "superblock": true,
00:17:36.844 "num_base_bdevs": 2,
00:17:36.844 "num_base_bdevs_discovered": 1,
00:17:36.844 "num_base_bdevs_operational": 2,
00:17:36.844 "base_bdevs_list": [
00:17:36.844 {
00:17:36.844 "name": "pt1",
00:17:36.844 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:36.844 "is_configured": true,
00:17:36.844 "data_offset": 256,
00:17:36.844 "data_size": 7936
00:17:36.844 },
00:17:36.844 {
00:17:36.844 "name": null,
00:17:36.844 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:36.844 "is_configured": false,
00:17:36.844 "data_offset": 256,
00:17:36.844 "data_size": 7936
00:17:36.844 }
00:17:36.844 ]
00:17:36.844 }'
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:36.844 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']'
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:37.415 [2024-11-16 18:58:20.708324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:37.415 [2024-11-16 18:58:20.708370] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:37.415 [2024-11-16 18:58:20.708385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080
00:17:37.415 [2024-11-16 18:58:20.708394] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:37.415 [2024-11-16 18:58:20.708491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:37.415 [2024-11-16 18:58:20.708502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:37.415 [2024-11-16 18:58:20.708534] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:37.415 [2024-11-16 18:58:20.708553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:37.415 [2024-11-16 18:58:20.708619] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:17:37.415 [2024-11-16 18:58:20.708628] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128
00:17:37.415 [2024-11-16 18:58:20.708697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:17:37.415 [2024-11-16 18:58:20.708762] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:17:37.415 [2024-11-16 18:58:20.708770] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:17:37.415 [2024-11-16 18:58:20.708817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:37.415 pt2
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:37.415 "name": "raid_bdev1",
00:17:37.415 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f",
00:17:37.415 "strip_size_kb": 0,
00:17:37.415 "state": "online",
00:17:37.415 "raid_level": "raid1",
00:17:37.415 "superblock": true,
00:17:37.415 "num_base_bdevs": 2,
00:17:37.415 "num_base_bdevs_discovered": 2,
00:17:37.415 "num_base_bdevs_operational": 2,
00:17:37.415 "base_bdevs_list": [
00:17:37.415 {
00:17:37.415 "name": "pt1",
00:17:37.415 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:37.415 "is_configured": true,
00:17:37.415 "data_offset": 256,
00:17:37.415 "data_size": 7936
00:17:37.415 },
00:17:37.415 {
00:17:37.415 "name": "pt2",
00:17:37.415 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:37.415 "is_configured": true,
00:17:37.415 "data_offset": 256,
00:17:37.415 "data_size": 7936
00:17:37.415 }
00:17:37.415 ]
00:17:37.415 }'
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:37.415 18:58:20 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:37.986 [2024-11-16 18:58:21.167888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.986 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:37.986 "name": "raid_bdev1",
00:17:37.986 "aliases": [
00:17:37.986 "126bcca5-4a2b-43f2-afac-9b060260af6f"
00:17:37.986 ],
00:17:37.986 "product_name": "Raid Volume",
00:17:37.986 "block_size": 4128,
00:17:37.986 "num_blocks": 7936,
00:17:37.986 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f",
00:17:37.986 "md_size": 32,
00:17:37.986 "md_interleave": true,
00:17:37.986 "dif_type": 0,
00:17:37.986 "assigned_rate_limits": {
00:17:37.986 "rw_ios_per_sec": 0,
00:17:37.986 "rw_mbytes_per_sec": 0,
00:17:37.987 "r_mbytes_per_sec": 0,
00:17:37.987 "w_mbytes_per_sec": 0
00:17:37.987 },
00:17:37.987 "claimed": false,
00:17:37.987 "zoned": false,
00:17:37.987 "supported_io_types": {
00:17:37.987 "read": true,
00:17:37.987 "write": true,
00:17:37.987 "unmap": false,
00:17:37.987 "flush": false,
00:17:37.987 "reset": true,
00:17:37.987 "nvme_admin": false,
00:17:37.987 "nvme_io": false,
00:17:37.987 "nvme_io_md": false,
00:17:37.987 "write_zeroes": true,
00:17:37.987 "zcopy": false,
00:17:37.987 "get_zone_info": false,
00:17:37.987 "zone_management": false,
00:17:37.987 "zone_append": false,
00:17:37.987 "compare": false,
00:17:37.987 "compare_and_write": false,
00:17:37.987 "abort": false,
00:17:37.987 "seek_hole": false,
00:17:37.987 "seek_data": false,
00:17:37.987 "copy": false,
00:17:37.987 "nvme_iov_md": false
00:17:37.987 },
00:17:37.987 "memory_domains": [
00:17:37.987 {
00:17:37.987 "dma_device_id": "system",
00:17:37.987 "dma_device_type": 1
00:17:37.987 },
00:17:37.987 {
00:17:37.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:37.987 "dma_device_type": 2
00:17:37.987 },
00:17:37.987 {
00:17:37.987 "dma_device_id": "system",
00:17:37.987 "dma_device_type": 1
00:17:37.987 },
00:17:37.987 {
00:17:37.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:37.987 "dma_device_type": 2
00:17:37.987 }
00:17:37.987 ],
00:17:37.987 "driver_specific": {
00:17:37.987 "raid": {
00:17:37.987 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f",
00:17:37.987 "strip_size_kb": 0,
00:17:37.987 "state": "online",
00:17:37.987 "raid_level": "raid1",
00:17:37.987 "superblock": true,
00:17:37.987 "num_base_bdevs": 2,
00:17:37.987 "num_base_bdevs_discovered": 2,
00:17:37.987 "num_base_bdevs_operational": 2,
00:17:37.987 "base_bdevs_list": [
00:17:37.987 {
00:17:37.987 "name": "pt1",
00:17:37.987 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:37.987 "is_configured": true,
00:17:37.987 "data_offset": 256,
00:17:37.987 "data_size": 7936
00:17:37.987 },
00:17:37.987 {
00:17:37.987 "name": "pt2",
00:17:37.987 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:37.987 "is_configured": true,
00:17:37.987 "data_offset": 256,
00:17:37.987 "data_size": 7936
00:17:37.987 }
00:17:37.987 ]
00:17:37.987 }
00:17:37.987 }
00:17:37.987 }'
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:37.987 pt2'
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0'
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0'
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]]
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- #
cmp_base_bdev='4128 32 true 0' 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.987 [2024-11-16 18:58:21.375446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 126bcca5-4a2b-43f2-afac-9b060260af6f '!=' 126bcca5-4a2b-43f2-afac-9b060260af6f ']' 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.987 [2024-11-16 18:58:21.419176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:37.987 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.247 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:17:38.247 "name": "raid_bdev1", 00:17:38.247 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f", 00:17:38.247 "strip_size_kb": 0, 00:17:38.247 "state": "online", 00:17:38.247 "raid_level": "raid1", 00:17:38.247 "superblock": true, 00:17:38.247 "num_base_bdevs": 2, 00:17:38.247 "num_base_bdevs_discovered": 1, 00:17:38.247 "num_base_bdevs_operational": 1, 00:17:38.247 "base_bdevs_list": [ 00:17:38.247 { 00:17:38.247 "name": null, 00:17:38.247 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.247 "is_configured": false, 00:17:38.247 "data_offset": 0, 00:17:38.247 "data_size": 7936 00:17:38.247 }, 00:17:38.247 { 00:17:38.247 "name": "pt2", 00:17:38.247 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.247 "is_configured": true, 00:17:38.247 "data_offset": 256, 00:17:38.247 "data_size": 7936 00:17:38.247 } 00:17:38.247 ] 00:17:38.247 }' 00:17:38.248 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.248 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 [2024-11-16 18:58:21.874351] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.508 [2024-11-16 18:58:21.874376] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.508 [2024-11-16 18:58:21.874428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.508 [2024-11-16 18:58:21.874469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
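The trace above renders the metadata check as a glob-escaped `[[ ]]` test (`\4\1\2\8\ \3\2\ \t\r\u\e\ \0`), which is how xtrace prints a quoted right-hand side. A minimal standalone sketch of that comparison, with the field values copied from the log and variable names taken from `bdev_raid.sh`, is:

```shell
#!/usr/bin/env bash
# Sketch of the check at bdev_raid.sh@193: the raid bdev's
# "block_size md_size md_interleave dif_type" string must match
# the same fields extracted from each base bdev (pt1, pt2).
cmp_raid_bdev='4128 32 true 0'   # values reported for raid_bdev1 in the log
cmp_base_bdev='4128 32 true 0'   # values reported for pt1/pt2 in the log

# Quoting the right-hand side forces a literal (non-glob) match;
# under `set -x` bash prints that side with every character escaped.
if [[ $cmp_base_bdev == "$cmp_raid_bdev" ]]; then
    echo "metadata match"
else
    echo "metadata mismatch" >&2
    exit 1
fi
```

The comparison passes here because both strings come from the same bdev configuration; a mismatch would make the test exit non-zero.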
00:17:38.508 [2024-11-16 18:58:21.874483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 [2024-11-16 18:58:21.934262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:38.508 [2024-11-16 18:58:21.934309] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:38.508 [2024-11-16 18:58:21.934323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:38.508 [2024-11-16 18:58:21.934333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:38.508 [2024-11-16 18:58:21.936109] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:38.508 [2024-11-16 18:58:21.936142] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:38.508 [2024-11-16 18:58:21.936185] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:38.508 [2024-11-16 18:58:21.936231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:38.508 [2024-11-16 18:58:21.936286] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:38.508 [2024-11-16 18:58:21.936302] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:17:38.508 [2024-11-16 18:58:21.936384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:38.508 [2024-11-16 18:58:21.936447] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:38.508 [2024-11-16 18:58:21.936454] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:38.508 [2024-11-16 18:58:21.936507] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:38.508 pt2 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:38.508 18:58:21 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.508 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.769 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:38.769 "name": "raid_bdev1", 00:17:38.769 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f", 00:17:38.769 "strip_size_kb": 0, 00:17:38.769 "state": "online", 00:17:38.769 "raid_level": "raid1", 00:17:38.769 "superblock": true, 00:17:38.769 "num_base_bdevs": 2, 00:17:38.769 "num_base_bdevs_discovered": 1, 00:17:38.769 "num_base_bdevs_operational": 1, 00:17:38.769 "base_bdevs_list": [ 00:17:38.769 { 00:17:38.769 "name": null, 00:17:38.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:38.769 "is_configured": false, 00:17:38.769 "data_offset": 256, 00:17:38.769 "data_size": 7936 00:17:38.769 }, 00:17:38.769 { 00:17:38.769 "name": "pt2", 00:17:38.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.769 "is_configured": true, 00:17:38.769 "data_offset": 256, 00:17:38.769 "data_size": 7936 00:17:38.769 } 00:17:38.769 ] 00:17:38.769 }' 00:17:38.769 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:38.769 18:58:21 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:39.029 18:58:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.029 [2024-11-16 18:58:22.369500] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.029 [2024-11-16 18:58:22.369530] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:39.029 [2024-11-16 18:58:22.369585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.029 [2024-11-16 18:58:22.369627] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.029 [2024-11-16 18:58:22.369635] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.029 [2024-11-16 18:58:22.433409] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:39.029 [2024-11-16 18:58:22.433475] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.029 [2024-11-16 18:58:22.433493] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:39.029 [2024-11-16 18:58:22.433501] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.029 [2024-11-16 18:58:22.435326] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.029 [2024-11-16 18:58:22.435358] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:39.029 [2024-11-16 18:58:22.435405] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:39.029 [2024-11-16 18:58:22.435454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:39.029 [2024-11-16 18:58:22.435538] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:39.029 [2024-11-16 18:58:22.435552] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:39.029 [2024-11-16 18:58:22.435567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:39.029 [2024-11-16 18:58:22.435631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:39.029 [2024-11-16 18:58:22.435705] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:17:39.029 [2024-11-16 18:58:22.435713] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:39.029 [2024-11-16 18:58:22.435768] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:39.029 [2024-11-16 18:58:22.435828] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:39.029 [2024-11-16 18:58:22.435855] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:39.029 [2024-11-16 18:58:22.435923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.029 pt1 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:39.029 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.030 18:58:22 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.030 "name": "raid_bdev1", 00:17:39.030 "uuid": "126bcca5-4a2b-43f2-afac-9b060260af6f", 00:17:39.030 "strip_size_kb": 0, 00:17:39.030 "state": "online", 00:17:39.030 "raid_level": "raid1", 00:17:39.030 "superblock": true, 00:17:39.030 "num_base_bdevs": 2, 00:17:39.030 "num_base_bdevs_discovered": 1, 00:17:39.030 "num_base_bdevs_operational": 1, 00:17:39.030 "base_bdevs_list": [ 00:17:39.030 { 00:17:39.030 "name": null, 00:17:39.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:39.030 "is_configured": false, 00:17:39.030 "data_offset": 256, 00:17:39.030 "data_size": 7936 00:17:39.030 }, 00:17:39.030 { 00:17:39.030 "name": "pt2", 00:17:39.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:39.030 "is_configured": true, 00:17:39.030 "data_offset": 256, 00:17:39.030 "data_size": 7936 00:17:39.030 } 00:17:39.030 ] 00:17:39.030 }' 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.030 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.601 [2024-11-16 18:58:22.904774] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 126bcca5-4a2b-43f2-afac-9b060260af6f '!=' 126bcca5-4a2b-43f2-afac-9b060260af6f ']' 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88338 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88338 ']' 00:17:39.601 18:58:22 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88338 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88338 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:39.601 killing process with pid 88338 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88338' 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88338 00:17:39.601 [2024-11-16 18:58:22.983180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:39.601 [2024-11-16 18:58:22.983244] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:39.601 [2024-11-16 18:58:22.983293] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:39.601 [2024-11-16 18:58:22.983305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:39.601 18:58:22 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88338 00:17:39.860 [2024-11-16 18:58:23.174715] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.799 18:58:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:17:40.799 00:17:40.799 real 0m5.857s 00:17:40.799 user 0m8.819s 00:17:40.799 sys 0m1.162s 00:17:40.799 
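The `run_test raid_rebuild_test_sb_md_interleaved ...` invocation below produces the `START TEST` / `END TEST` banners seen in this log. A simplified sketch of that wrapper pattern (banner format condensed to one line; `demo_test` is an illustrative stand-in, not an SPDK function) is:

```shell
#!/usr/bin/env bash
# Hedged sketch of the run_test wrapper pattern visible in the log:
# it brackets a named test command with banners and preserves its
# exit status. The real autotest_common.sh version is more elaborate.
run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    "$@"
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}

# Illustrative test body standing in for raid_rebuild_test.
demo_test() { echo "running with args: $*"; }

run_test demo_suite demo_test raid1 2 true
```

The pattern keeps per-test output grep-able in a long combined log, which is why every line in this transcript carries the enclosing test name.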
18:58:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:40.799 18:58:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:40.799 ************************************ 00:17:40.799 END TEST raid_superblock_test_md_interleaved 00:17:40.799 ************************************ 00:17:40.799 18:58:24 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:40.799 18:58:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:40.799 18:58:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:40.799 18:58:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:41.059 ************************************ 00:17:41.059 START TEST raid_rebuild_test_sb_md_interleaved 00:17:41.059 ************************************ 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:41.059 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88663 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88663 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88663 ']' 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:41.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:41.060 18:58:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.060 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:41.060 Zero copy mechanism will not be used. 00:17:41.060 [2024-11-16 18:58:24.395562] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:41.060 [2024-11-16 18:58:24.395693] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88663 ] 00:17:41.319 [2024-11-16 18:58:24.573638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.320 [2024-11-16 18:58:24.675430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.579 [2024-11-16 18:58:24.852314] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.579 [2024-11-16 18:58:24.852376] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.839 BaseBdev1_malloc 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.839 18:58:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.839 [2024-11-16 18:58:25.237560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:41.839 [2024-11-16 18:58:25.237640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.839 [2024-11-16 18:58:25.237674] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:41.839 [2024-11-16 18:58:25.237686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.839 [2024-11-16 18:58:25.239373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.839 [2024-11-16 18:58:25.239404] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:41.839 BaseBdev1 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.839 BaseBdev2_malloc 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:17:41.839 [2024-11-16 18:58:25.290803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:41.839 [2024-11-16 18:58:25.290876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.839 [2024-11-16 18:58:25.290895] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:41.839 [2024-11-16 18:58:25.290908] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.839 [2024-11-16 18:58:25.292702] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.839 [2024-11-16 18:58:25.292733] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:41.839 BaseBdev2 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.839 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.100 spare_malloc 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.100 spare_delay 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.100 [2024-11-16 18:58:25.390661] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:42.100 [2024-11-16 18:58:25.390724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.100 [2024-11-16 18:58:25.390743] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:42.100 [2024-11-16 18:58:25.390753] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.100 [2024-11-16 18:58:25.392516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.100 [2024-11-16 18:58:25.392550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:42.100 spare 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.100 [2024-11-16 18:58:25.402678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:42.100 [2024-11-16 18:58:25.404443] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:42.100 [2024-11-16 
18:58:25.404637] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:42.100 [2024-11-16 18:58:25.404662] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:42.100 [2024-11-16 18:58:25.404739] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:42.100 [2024-11-16 18:58:25.404808] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:42.100 [2024-11-16 18:58:25.404817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:42.100 [2024-11-16 18:58:25.404893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.100 "name": "raid_bdev1", 00:17:42.100 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:42.100 "strip_size_kb": 0, 00:17:42.100 "state": "online", 00:17:42.100 "raid_level": "raid1", 00:17:42.100 "superblock": true, 00:17:42.100 "num_base_bdevs": 2, 00:17:42.100 "num_base_bdevs_discovered": 2, 00:17:42.100 "num_base_bdevs_operational": 2, 00:17:42.100 "base_bdevs_list": [ 00:17:42.100 { 00:17:42.100 "name": "BaseBdev1", 00:17:42.100 "uuid": "137f3e2f-bec3-5077-9a69-1615d4e903fe", 00:17:42.100 "is_configured": true, 00:17:42.100 "data_offset": 256, 00:17:42.100 "data_size": 7936 00:17:42.100 }, 00:17:42.100 { 00:17:42.100 "name": "BaseBdev2", 00:17:42.100 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:42.100 "is_configured": true, 00:17:42.100 "data_offset": 256, 00:17:42.100 "data_size": 7936 00:17:42.100 } 00:17:42.100 ] 00:17:42.100 }' 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.100 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.670 18:58:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.670 [2024-11-16 18:58:25.850106] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:42.670 18:58:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.670 [2024-11-16 18:58:25.945706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.670 18:58:25 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.670 "name": "raid_bdev1", 00:17:42.670 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:42.670 "strip_size_kb": 0, 00:17:42.670 "state": "online", 00:17:42.670 "raid_level": "raid1", 00:17:42.670 "superblock": true, 00:17:42.670 "num_base_bdevs": 2, 00:17:42.670 "num_base_bdevs_discovered": 1, 00:17:42.670 "num_base_bdevs_operational": 1, 00:17:42.670 "base_bdevs_list": [ 00:17:42.670 { 00:17:42.670 "name": null, 00:17:42.670 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.670 "is_configured": false, 00:17:42.670 "data_offset": 0, 00:17:42.670 "data_size": 7936 00:17:42.670 }, 00:17:42.670 { 00:17:42.670 "name": "BaseBdev2", 00:17:42.670 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:42.670 "is_configured": true, 00:17:42.670 "data_offset": 256, 00:17:42.670 "data_size": 7936 00:17:42.670 } 00:17:42.670 ] 00:17:42.670 }' 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.670 18:58:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.930 18:58:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:42.930 18:58:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.930 18:58:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:42.930 [2024-11-16 18:58:26.349008] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.930 [2024-11-16 18:58:26.364679] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:42.930 18:58:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.930 18:58:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:42.931 [2024-11-16 18:58:26.366421] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.311 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.311 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.311 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.311 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.311 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.312 "name": "raid_bdev1", 00:17:44.312 
"uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:44.312 "strip_size_kb": 0, 00:17:44.312 "state": "online", 00:17:44.312 "raid_level": "raid1", 00:17:44.312 "superblock": true, 00:17:44.312 "num_base_bdevs": 2, 00:17:44.312 "num_base_bdevs_discovered": 2, 00:17:44.312 "num_base_bdevs_operational": 2, 00:17:44.312 "process": { 00:17:44.312 "type": "rebuild", 00:17:44.312 "target": "spare", 00:17:44.312 "progress": { 00:17:44.312 "blocks": 2560, 00:17:44.312 "percent": 32 00:17:44.312 } 00:17:44.312 }, 00:17:44.312 "base_bdevs_list": [ 00:17:44.312 { 00:17:44.312 "name": "spare", 00:17:44.312 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:44.312 "is_configured": true, 00:17:44.312 "data_offset": 256, 00:17:44.312 "data_size": 7936 00:17:44.312 }, 00:17:44.312 { 00:17:44.312 "name": "BaseBdev2", 00:17:44.312 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:44.312 "is_configured": true, 00:17:44.312 "data_offset": 256, 00:17:44.312 "data_size": 7936 00:17:44.312 } 00:17:44.312 ] 00:17:44.312 }' 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.312 [2024-11-16 18:58:27.506066] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:17:44.312 [2024-11-16 18:58:27.570981] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:44.312 [2024-11-16 18:58:27.571033] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.312 [2024-11-16 18:58:27.571046] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:44.312 [2024-11-16 18:58:27.571059] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:44.312 "name": "raid_bdev1", 00:17:44.312 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:44.312 "strip_size_kb": 0, 00:17:44.312 "state": "online", 00:17:44.312 "raid_level": "raid1", 00:17:44.312 "superblock": true, 00:17:44.312 "num_base_bdevs": 2, 00:17:44.312 "num_base_bdevs_discovered": 1, 00:17:44.312 "num_base_bdevs_operational": 1, 00:17:44.312 "base_bdevs_list": [ 00:17:44.312 { 00:17:44.312 "name": null, 00:17:44.312 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.312 "is_configured": false, 00:17:44.312 "data_offset": 0, 00:17:44.312 "data_size": 7936 00:17:44.312 }, 00:17:44.312 { 00:17:44.312 "name": "BaseBdev2", 00:17:44.312 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:44.312 "is_configured": true, 00:17:44.312 "data_offset": 256, 00:17:44.312 "data_size": 7936 00:17:44.312 } 00:17:44.312 ] 00:17:44.312 }' 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:44.312 18:58:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.572 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.832 "name": "raid_bdev1", 00:17:44.832 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:44.832 "strip_size_kb": 0, 00:17:44.832 "state": "online", 00:17:44.832 "raid_level": "raid1", 00:17:44.832 "superblock": true, 00:17:44.832 "num_base_bdevs": 2, 00:17:44.832 "num_base_bdevs_discovered": 1, 00:17:44.832 "num_base_bdevs_operational": 1, 00:17:44.832 "base_bdevs_list": [ 00:17:44.832 { 00:17:44.832 "name": null, 00:17:44.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.832 "is_configured": false, 00:17:44.832 "data_offset": 0, 00:17:44.832 "data_size": 7936 00:17:44.832 }, 00:17:44.832 { 00:17:44.832 "name": "BaseBdev2", 00:17:44.832 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:44.832 "is_configured": true, 00:17:44.832 "data_offset": 256, 00:17:44.832 "data_size": 7936 00:17:44.832 } 00:17:44.832 ] 00:17:44.832 }' 
00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:44.832 [2024-11-16 18:58:28.158383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:44.832 [2024-11-16 18:58:28.173465] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.832 18:58:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:44.832 [2024-11-16 18:58:28.175211] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.773 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.773 "name": "raid_bdev1", 00:17:45.773 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:45.773 "strip_size_kb": 0, 00:17:45.773 "state": "online", 00:17:45.773 "raid_level": "raid1", 00:17:45.773 "superblock": true, 00:17:45.773 "num_base_bdevs": 2, 00:17:45.773 "num_base_bdevs_discovered": 2, 00:17:45.773 "num_base_bdevs_operational": 2, 00:17:45.773 "process": { 00:17:45.773 "type": "rebuild", 00:17:45.773 "target": "spare", 00:17:45.773 "progress": { 00:17:45.773 "blocks": 2560, 00:17:45.773 "percent": 32 00:17:45.773 } 00:17:45.773 }, 00:17:45.773 "base_bdevs_list": [ 00:17:45.773 { 00:17:45.773 "name": "spare", 00:17:45.773 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:45.773 "is_configured": true, 00:17:45.773 "data_offset": 256, 00:17:45.773 "data_size": 7936 00:17:45.773 }, 00:17:45.773 { 00:17:45.773 "name": "BaseBdev2", 00:17:45.773 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:45.773 "is_configured": true, 00:17:45.773 "data_offset": 256, 00:17:45.773 "data_size": 7936 00:17:45.773 } 00:17:45.773 ] 00:17:45.773 }' 00:17:45.773 18:58:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:46.033 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=711 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:46.033 18:58:29 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.033 "name": "raid_bdev1", 00:17:46.033 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:46.033 "strip_size_kb": 0, 00:17:46.033 "state": "online", 00:17:46.033 "raid_level": "raid1", 00:17:46.033 "superblock": true, 00:17:46.033 "num_base_bdevs": 2, 00:17:46.033 "num_base_bdevs_discovered": 2, 00:17:46.033 "num_base_bdevs_operational": 2, 00:17:46.033 "process": { 00:17:46.033 "type": "rebuild", 00:17:46.033 "target": "spare", 00:17:46.033 "progress": { 00:17:46.033 "blocks": 2816, 00:17:46.033 "percent": 35 00:17:46.033 } 00:17:46.033 }, 00:17:46.033 "base_bdevs_list": [ 00:17:46.033 { 00:17:46.033 "name": "spare", 00:17:46.033 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:46.033 "is_configured": true, 00:17:46.033 "data_offset": 256, 00:17:46.033 "data_size": 7936 00:17:46.033 }, 00:17:46.033 { 00:17:46.033 "name": "BaseBdev2", 00:17:46.033 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:46.033 "is_configured": true, 00:17:46.033 "data_offset": 256, 00:17:46.033 "data_size": 7936 00:17:46.033 } 00:17:46.033 ] 00:17:46.033 }' 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.033 18:58:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.416 18:58:30 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.416 "name": "raid_bdev1", 00:17:47.416 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:47.416 "strip_size_kb": 0, 00:17:47.416 "state": "online", 00:17:47.416 "raid_level": "raid1", 00:17:47.416 "superblock": true, 00:17:47.416 "num_base_bdevs": 2, 00:17:47.416 "num_base_bdevs_discovered": 2, 00:17:47.416 "num_base_bdevs_operational": 2, 00:17:47.416 "process": { 00:17:47.416 "type": "rebuild", 00:17:47.416 "target": "spare", 00:17:47.416 "progress": { 00:17:47.416 "blocks": 5632, 00:17:47.416 "percent": 70 00:17:47.416 } 00:17:47.416 }, 00:17:47.416 "base_bdevs_list": [ 00:17:47.416 { 00:17:47.416 "name": "spare", 00:17:47.416 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:47.416 "is_configured": true, 00:17:47.416 "data_offset": 256, 00:17:47.416 "data_size": 7936 00:17:47.416 }, 00:17:47.416 { 00:17:47.416 "name": "BaseBdev2", 00:17:47.416 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:47.416 "is_configured": true, 00:17:47.416 "data_offset": 256, 00:17:47.416 "data_size": 7936 00:17:47.416 } 00:17:47.416 ] 00:17:47.416 }' 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.416 18:58:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.987 [2024-11-16 18:58:31.286396] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:47.987 [2024-11-16 18:58:31.286478] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:47.987 [2024-11-16 18:58:31.286561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.247 "name": "raid_bdev1", 00:17:48.247 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:48.247 "strip_size_kb": 0, 00:17:48.247 "state": "online", 00:17:48.247 "raid_level": "raid1", 00:17:48.247 "superblock": true, 00:17:48.247 "num_base_bdevs": 2, 00:17:48.247 
"num_base_bdevs_discovered": 2, 00:17:48.247 "num_base_bdevs_operational": 2, 00:17:48.247 "base_bdevs_list": [ 00:17:48.247 { 00:17:48.247 "name": "spare", 00:17:48.247 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:48.247 "is_configured": true, 00:17:48.247 "data_offset": 256, 00:17:48.247 "data_size": 7936 00:17:48.247 }, 00:17:48.247 { 00:17:48.247 "name": "BaseBdev2", 00:17:48.247 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:48.247 "is_configured": true, 00:17:48.247 "data_offset": 256, 00:17:48.247 "data_size": 7936 00:17:48.247 } 00:17:48.247 ] 00:17:48.247 }' 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:48.247 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.507 18:58:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.507 "name": "raid_bdev1", 00:17:48.507 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:48.507 "strip_size_kb": 0, 00:17:48.507 "state": "online", 00:17:48.507 "raid_level": "raid1", 00:17:48.507 "superblock": true, 00:17:48.507 "num_base_bdevs": 2, 00:17:48.507 "num_base_bdevs_discovered": 2, 00:17:48.507 "num_base_bdevs_operational": 2, 00:17:48.507 "base_bdevs_list": [ 00:17:48.507 { 00:17:48.507 "name": "spare", 00:17:48.507 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:48.507 "is_configured": true, 00:17:48.507 "data_offset": 256, 00:17:48.507 "data_size": 7936 00:17:48.507 }, 00:17:48.507 { 00:17:48.507 "name": "BaseBdev2", 00:17:48.507 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:48.507 "is_configured": true, 00:17:48.507 "data_offset": 256, 00:17:48.507 "data_size": 7936 00:17:48.507 } 00:17:48.507 ] 00:17:48.507 }' 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.507 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:48.508 18:58:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.508 "name": 
"raid_bdev1", 00:17:48.508 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:48.508 "strip_size_kb": 0, 00:17:48.508 "state": "online", 00:17:48.508 "raid_level": "raid1", 00:17:48.508 "superblock": true, 00:17:48.508 "num_base_bdevs": 2, 00:17:48.508 "num_base_bdevs_discovered": 2, 00:17:48.508 "num_base_bdevs_operational": 2, 00:17:48.508 "base_bdevs_list": [ 00:17:48.508 { 00:17:48.508 "name": "spare", 00:17:48.508 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:48.508 "is_configured": true, 00:17:48.508 "data_offset": 256, 00:17:48.508 "data_size": 7936 00:17:48.508 }, 00:17:48.508 { 00:17:48.508 "name": "BaseBdev2", 00:17:48.508 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:48.508 "is_configured": true, 00:17:48.508 "data_offset": 256, 00:17:48.508 "data_size": 7936 00:17:48.508 } 00:17:48.508 ] 00:17:48.508 }' 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.508 18:58:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.078 [2024-11-16 18:58:32.257392] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:49.078 [2024-11-16 18:58:32.257425] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:49.078 [2024-11-16 18:58:32.257499] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:49.078 [2024-11-16 18:58:32.257573] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:49.078 [2024-11-16 
18:58:32.257589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.078 18:58:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.078 [2024-11-16 18:58:32.337250] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:49.078 [2024-11-16 18:58:32.337297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.078 [2024-11-16 18:58:32.337316] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:49.078 [2024-11-16 18:58:32.337325] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.078 [2024-11-16 18:58:32.339113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.078 [2024-11-16 18:58:32.339144] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:49.078 [2024-11-16 18:58:32.339191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:49.078 [2024-11-16 18:58:32.339243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.078 [2024-11-16 18:58:32.339346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:49.078 spare 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.078 [2024-11-16 18:58:32.439238] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:49.078 [2024-11-16 18:58:32.439271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:49.078 [2024-11-16 18:58:32.439352] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:49.078 [2024-11-16 18:58:32.439421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:49.078 [2024-11-16 18:58:32.439428] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:49.078 [2024-11-16 18:58:32.439497] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.078 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.079 18:58:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.079 "name": "raid_bdev1", 00:17:49.079 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:49.079 "strip_size_kb": 0, 00:17:49.079 "state": "online", 00:17:49.079 "raid_level": "raid1", 00:17:49.079 "superblock": true, 00:17:49.079 "num_base_bdevs": 2, 00:17:49.079 "num_base_bdevs_discovered": 2, 00:17:49.079 "num_base_bdevs_operational": 2, 00:17:49.079 "base_bdevs_list": [ 00:17:49.079 { 00:17:49.079 "name": "spare", 00:17:49.079 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:49.079 "is_configured": true, 00:17:49.079 "data_offset": 256, 00:17:49.079 "data_size": 7936 00:17:49.079 }, 00:17:49.079 { 00:17:49.079 "name": "BaseBdev2", 00:17:49.079 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:49.079 "is_configured": true, 00:17:49.079 "data_offset": 256, 00:17:49.079 "data_size": 7936 00:17:49.079 } 00:17:49.079 ] 00:17:49.079 }' 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.079 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.649 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.649 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.649 18:58:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.649 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.649 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.650 "name": "raid_bdev1", 00:17:49.650 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:49.650 "strip_size_kb": 0, 00:17:49.650 "state": "online", 00:17:49.650 "raid_level": "raid1", 00:17:49.650 "superblock": true, 00:17:49.650 "num_base_bdevs": 2, 00:17:49.650 "num_base_bdevs_discovered": 2, 00:17:49.650 "num_base_bdevs_operational": 2, 00:17:49.650 "base_bdevs_list": [ 00:17:49.650 { 00:17:49.650 "name": "spare", 00:17:49.650 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:49.650 "is_configured": true, 00:17:49.650 "data_offset": 256, 00:17:49.650 "data_size": 7936 00:17:49.650 }, 00:17:49.650 { 00:17:49.650 "name": "BaseBdev2", 00:17:49.650 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:49.650 "is_configured": true, 00:17:49.650 "data_offset": 256, 00:17:49.650 "data_size": 7936 00:17:49.650 } 00:17:49.650 ] 00:17:49.650 }' 00:17:49.650 18:58:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.650 18:58:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.650 [2024-11-16 18:58:33.040176] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.650 18:58:33 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.650 "name": "raid_bdev1", 00:17:49.650 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:49.650 "strip_size_kb": 0, 00:17:49.650 "state": "online", 00:17:49.650 
"raid_level": "raid1", 00:17:49.650 "superblock": true, 00:17:49.650 "num_base_bdevs": 2, 00:17:49.650 "num_base_bdevs_discovered": 1, 00:17:49.650 "num_base_bdevs_operational": 1, 00:17:49.650 "base_bdevs_list": [ 00:17:49.650 { 00:17:49.650 "name": null, 00:17:49.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.650 "is_configured": false, 00:17:49.650 "data_offset": 0, 00:17:49.650 "data_size": 7936 00:17:49.650 }, 00:17:49.650 { 00:17:49.650 "name": "BaseBdev2", 00:17:49.650 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:49.650 "is_configured": true, 00:17:49.650 "data_offset": 256, 00:17:49.650 "data_size": 7936 00:17:49.650 } 00:17:49.650 ] 00:17:49.650 }' 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.650 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:50.262 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.262 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:50.262 [2024-11-16 18:58:33.487465] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.262 [2024-11-16 18:58:33.487598] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:50.262 [2024-11-16 18:58:33.487614] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:50.262 [2024-11-16 18:58:33.487644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:50.262 [2024-11-16 18:58:33.502574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:50.262 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.262 18:58:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:50.262 [2024-11-16 18:58:33.504307] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.203 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:17:51.203 "name": "raid_bdev1", 00:17:51.203 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:51.203 "strip_size_kb": 0, 00:17:51.203 "state": "online", 00:17:51.203 "raid_level": "raid1", 00:17:51.203 "superblock": true, 00:17:51.203 "num_base_bdevs": 2, 00:17:51.203 "num_base_bdevs_discovered": 2, 00:17:51.203 "num_base_bdevs_operational": 2, 00:17:51.203 "process": { 00:17:51.203 "type": "rebuild", 00:17:51.203 "target": "spare", 00:17:51.203 "progress": { 00:17:51.203 "blocks": 2560, 00:17:51.203 "percent": 32 00:17:51.203 } 00:17:51.203 }, 00:17:51.204 "base_bdevs_list": [ 00:17:51.204 { 00:17:51.204 "name": "spare", 00:17:51.204 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:51.204 "is_configured": true, 00:17:51.204 "data_offset": 256, 00:17:51.204 "data_size": 7936 00:17:51.204 }, 00:17:51.204 { 00:17:51.204 "name": "BaseBdev2", 00:17:51.204 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:51.204 "is_configured": true, 00:17:51.204 "data_offset": 256, 00:17:51.204 "data_size": 7936 00:17:51.204 } 00:17:51.204 ] 00:17:51.204 }' 00:17:51.204 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.204 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.204 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.204 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.204 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:51.204 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.204 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.204 [2024-11-16 18:58:34.644379] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.464 [2024-11-16 18:58:34.708734] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:51.464 [2024-11-16 18:58:34.708786] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.464 [2024-11-16 18:58:34.708799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:51.464 [2024-11-16 18:58:34.708807] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.464 18:58:34 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.464 "name": "raid_bdev1", 00:17:51.464 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:51.464 "strip_size_kb": 0, 00:17:51.464 "state": "online", 00:17:51.464 "raid_level": "raid1", 00:17:51.464 "superblock": true, 00:17:51.464 "num_base_bdevs": 2, 00:17:51.464 "num_base_bdevs_discovered": 1, 00:17:51.464 "num_base_bdevs_operational": 1, 00:17:51.464 "base_bdevs_list": [ 00:17:51.464 { 00:17:51.464 "name": null, 00:17:51.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:51.464 "is_configured": false, 00:17:51.464 "data_offset": 0, 00:17:51.464 "data_size": 7936 00:17:51.464 }, 00:17:51.464 { 00:17:51.464 "name": "BaseBdev2", 00:17:51.464 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:51.464 "is_configured": true, 00:17:51.464 "data_offset": 256, 00:17:51.464 "data_size": 7936 00:17:51.464 } 00:17:51.464 ] 00:17:51.464 }' 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.464 18:58:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.725 18:58:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:51.725 18:58:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.725 18:58:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.725 [2024-11-16 18:58:35.192863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:51.725 [2024-11-16 18:58:35.192914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:51.725 [2024-11-16 18:58:35.192935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:51.725 [2024-11-16 18:58:35.192946] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:51.725 [2024-11-16 18:58:35.193113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:51.725 [2024-11-16 18:58:35.193129] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:51.725 [2024-11-16 18:58:35.193172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:51.725 [2024-11-16 18:58:35.193184] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:51.725 [2024-11-16 18:58:35.193193] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:51.725 [2024-11-16 18:58:35.193218] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:51.985 [2024-11-16 18:58:35.207006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:51.985 spare 00:17:51.985 18:58:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.985 18:58:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:51.985 [2024-11-16 18:58:35.208733] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.926 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:52.926 "name": "raid_bdev1", 00:17:52.926 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:52.926 "strip_size_kb": 0, 00:17:52.926 "state": "online", 00:17:52.926 "raid_level": "raid1", 00:17:52.926 "superblock": true, 00:17:52.926 "num_base_bdevs": 2, 00:17:52.926 "num_base_bdevs_discovered": 2, 00:17:52.926 "num_base_bdevs_operational": 2, 00:17:52.926 "process": { 00:17:52.926 "type": "rebuild", 00:17:52.926 "target": "spare", 00:17:52.926 "progress": { 00:17:52.926 "blocks": 2560, 00:17:52.926 "percent": 32 00:17:52.926 } 00:17:52.926 }, 00:17:52.926 "base_bdevs_list": [ 00:17:52.926 { 00:17:52.926 "name": "spare", 00:17:52.926 "uuid": "a903b2d6-ab5f-5997-b3af-57bf8dc7d896", 00:17:52.926 "is_configured": true, 00:17:52.926 "data_offset": 256, 00:17:52.926 "data_size": 7936 00:17:52.926 }, 00:17:52.926 { 00:17:52.926 "name": "BaseBdev2", 00:17:52.926 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:52.926 "is_configured": true, 00:17:52.926 "data_offset": 256, 00:17:52.926 "data_size": 7936 00:17:52.926 } 00:17:52.926 ] 00:17:52.926 }' 00:17:52.927 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.927 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.927 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.927 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.927 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:52.927 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.927 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.927 [2024-11-16 
18:58:36.372211] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.187 [2024-11-16 18:58:36.413047] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:53.187 [2024-11-16 18:58:36.413093] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:53.187 [2024-11-16 18:58:36.413109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:53.187 [2024-11-16 18:58:36.413115] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:53.187 18:58:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:53.187 "name": "raid_bdev1", 00:17:53.187 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:53.187 "strip_size_kb": 0, 00:17:53.187 "state": "online", 00:17:53.187 "raid_level": "raid1", 00:17:53.187 "superblock": true, 00:17:53.187 "num_base_bdevs": 2, 00:17:53.187 "num_base_bdevs_discovered": 1, 00:17:53.187 "num_base_bdevs_operational": 1, 00:17:53.187 "base_bdevs_list": [ 00:17:53.187 { 00:17:53.187 "name": null, 00:17:53.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.187 "is_configured": false, 00:17:53.187 "data_offset": 0, 00:17:53.187 "data_size": 7936 00:17:53.187 }, 00:17:53.187 { 00:17:53.187 "name": "BaseBdev2", 00:17:53.187 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:53.187 "is_configured": true, 00:17:53.187 "data_offset": 256, 00:17:53.187 "data_size": 7936 00:17:53.187 } 00:17:53.187 ] 00:17:53.187 }' 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:53.187 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:53.757 18:58:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.757 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.757 "name": "raid_bdev1", 00:17:53.757 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:53.757 "strip_size_kb": 0, 00:17:53.757 "state": "online", 00:17:53.758 "raid_level": "raid1", 00:17:53.758 "superblock": true, 00:17:53.758 "num_base_bdevs": 2, 00:17:53.758 "num_base_bdevs_discovered": 1, 00:17:53.758 "num_base_bdevs_operational": 1, 00:17:53.758 "base_bdevs_list": [ 00:17:53.758 { 00:17:53.758 "name": null, 00:17:53.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:53.758 "is_configured": false, 00:17:53.758 "data_offset": 0, 00:17:53.758 "data_size": 7936 00:17:53.758 }, 00:17:53.758 { 00:17:53.758 "name": "BaseBdev2", 00:17:53.758 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:53.758 "is_configured": true, 00:17:53.758 "data_offset": 256, 
00:17:53.758 "data_size": 7936 00:17:53.758 } 00:17:53.758 ] 00:17:53.758 }' 00:17:53.758 18:58:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:53.758 [2024-11-16 18:58:37.076565] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:53.758 [2024-11-16 18:58:37.076613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.758 [2024-11-16 18:58:37.076636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:53.758 [2024-11-16 18:58:37.076646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.758 [2024-11-16 18:58:37.076798] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.758 [2024-11-16 18:58:37.076811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.758 [2024-11-16 18:58:37.076855] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:53.758 [2024-11-16 18:58:37.076867] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:53.758 [2024-11-16 18:58:37.076877] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:53.758 [2024-11-16 18:58:37.076886] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:53.758 BaseBdev1 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.758 18:58:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:54.699 18:58:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:54.699 "name": "raid_bdev1", 00:17:54.699 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:54.699 "strip_size_kb": 0, 00:17:54.699 "state": "online", 00:17:54.699 "raid_level": "raid1", 00:17:54.699 "superblock": true, 00:17:54.699 "num_base_bdevs": 2, 00:17:54.699 "num_base_bdevs_discovered": 1, 00:17:54.699 "num_base_bdevs_operational": 1, 00:17:54.699 "base_bdevs_list": [ 00:17:54.699 { 00:17:54.699 "name": null, 00:17:54.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:54.699 "is_configured": false, 00:17:54.699 "data_offset": 0, 00:17:54.699 "data_size": 7936 00:17:54.699 }, 00:17:54.699 { 00:17:54.699 "name": "BaseBdev2", 00:17:54.699 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:54.699 "is_configured": true, 00:17:54.699 "data_offset": 256, 00:17:54.699 "data_size": 7936 00:17:54.699 } 00:17:54.699 ] 00:17:54.699 }' 00:17:54.699 18:58:38 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:54.699 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.269 "name": "raid_bdev1", 00:17:55.269 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:55.269 "strip_size_kb": 0, 00:17:55.269 "state": "online", 00:17:55.269 "raid_level": "raid1", 00:17:55.269 "superblock": true, 00:17:55.269 "num_base_bdevs": 2, 00:17:55.269 "num_base_bdevs_discovered": 1, 00:17:55.269 "num_base_bdevs_operational": 1, 00:17:55.269 "base_bdevs_list": [ 00:17:55.269 { 00:17:55.269 "name": 
null, 00:17:55.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.269 "is_configured": false, 00:17:55.269 "data_offset": 0, 00:17:55.269 "data_size": 7936 00:17:55.269 }, 00:17:55.269 { 00:17:55.269 "name": "BaseBdev2", 00:17:55.269 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:55.269 "is_configured": true, 00:17:55.269 "data_offset": 256, 00:17:55.269 "data_size": 7936 00:17:55.269 } 00:17:55.269 ] 00:17:55.269 }' 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.269 [2024-11-16 18:58:38.681865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:55.269 [2024-11-16 18:58:38.681971] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:55.269 [2024-11-16 18:58:38.681988] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:55.269 request: 00:17:55.269 { 00:17:55.269 "base_bdev": "BaseBdev1", 00:17:55.269 "raid_bdev": "raid_bdev1", 00:17:55.269 "method": "bdev_raid_add_base_bdev", 00:17:55.269 "req_id": 1 00:17:55.269 } 00:17:55.269 Got JSON-RPC error response 00:17:55.269 response: 00:17:55.269 { 00:17:55.269 "code": -22, 00:17:55.269 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:55.269 } 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.269 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.270 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.270 18:58:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.653 "name": "raid_bdev1", 00:17:56.653 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:56.653 "strip_size_kb": 0, 
00:17:56.653 "state": "online", 00:17:56.653 "raid_level": "raid1", 00:17:56.653 "superblock": true, 00:17:56.653 "num_base_bdevs": 2, 00:17:56.653 "num_base_bdevs_discovered": 1, 00:17:56.653 "num_base_bdevs_operational": 1, 00:17:56.653 "base_bdevs_list": [ 00:17:56.653 { 00:17:56.653 "name": null, 00:17:56.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.653 "is_configured": false, 00:17:56.653 "data_offset": 0, 00:17:56.653 "data_size": 7936 00:17:56.653 }, 00:17:56.653 { 00:17:56.653 "name": "BaseBdev2", 00:17:56.653 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:56.653 "is_configured": true, 00:17:56.653 "data_offset": 256, 00:17:56.653 "data_size": 7936 00:17:56.653 } 00:17:56.653 ] 00:17:56.653 }' 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.653 18:58:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.653 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.653 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.653 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.653 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.653 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.914 
18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.914 "name": "raid_bdev1", 00:17:56.914 "uuid": "a5364f73-d390-4f7d-852f-d8cedcf067a7", 00:17:56.914 "strip_size_kb": 0, 00:17:56.914 "state": "online", 00:17:56.914 "raid_level": "raid1", 00:17:56.914 "superblock": true, 00:17:56.914 "num_base_bdevs": 2, 00:17:56.914 "num_base_bdevs_discovered": 1, 00:17:56.914 "num_base_bdevs_operational": 1, 00:17:56.914 "base_bdevs_list": [ 00:17:56.914 { 00:17:56.914 "name": null, 00:17:56.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.914 "is_configured": false, 00:17:56.914 "data_offset": 0, 00:17:56.914 "data_size": 7936 00:17:56.914 }, 00:17:56.914 { 00:17:56.914 "name": "BaseBdev2", 00:17:56.914 "uuid": "3ca2e194-8c0c-5156-800c-e18bf42a7ab6", 00:17:56.914 "is_configured": true, 00:17:56.914 "data_offset": 256, 00:17:56.914 "data_size": 7936 00:17:56.914 } 00:17:56.914 ] 00:17:56.914 }' 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88663 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88663 ']' 00:17:56.914 18:58:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88663 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88663 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:56.914 killing process with pid 88663 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88663' 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88663 00:17:56.914 Received shutdown signal, test time was about 60.000000 seconds 00:17:56.914 00:17:56.914 Latency(us) 00:17:56.914 [2024-11-16T18:58:40.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.914 [2024-11-16T18:58:40.386Z] =================================================================================================================== 00:17:56.914 [2024-11-16T18:58:40.386Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.914 [2024-11-16 18:58:40.279340] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.914 [2024-11-16 18:58:40.279430] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.914 [2024-11-16 18:58:40.279476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.914 [2024-11-16 18:58:40.279490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:56.914 18:58:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88663 00:17:57.175 [2024-11-16 18:58:40.556453] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:58.116 18:58:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:17:58.116 00:17:58.116 real 0m17.286s 00:17:58.116 user 0m22.620s 00:17:58.116 sys 0m1.659s 00:17:58.116 18:58:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.116 18:58:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:58.116 ************************************ 00:17:58.116 END TEST raid_rebuild_test_sb_md_interleaved 00:17:58.116 ************************************ 00:17:58.376 18:58:41 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:17:58.376 18:58:41 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:17:58.376 18:58:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88663 ']' 00:17:58.376 18:58:41 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88663 00:17:58.376 18:58:41 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:17:58.376 00:17:58.376 real 11m33.627s 00:17:58.376 user 15m37.179s 00:17:58.376 sys 1m47.049s 00:17:58.376 18:58:41 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:58.376 18:58:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.376 ************************************ 00:17:58.376 END TEST bdev_raid 00:17:58.376 ************************************ 00:17:58.376 18:58:41 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:58.376 18:58:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:58.376 18:58:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:58.376 18:58:41 -- common/autotest_common.sh@10 -- # set +x 00:17:58.376 
************************************ 00:17:58.376 START TEST spdkcli_raid 00:17:58.376 ************************************ 00:17:58.376 18:58:41 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:58.637 * Looking for test storage... 00:17:58.637 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:58.637 18:58:41 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:58.637 18:58:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:58.637 18:58:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:58.637 18:58:41 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.637 18:58:41 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:17:58.637 18:58:41 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.637 18:58:41 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:58.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.637 --rc genhtml_branch_coverage=1 00:17:58.637 --rc genhtml_function_coverage=1 00:17:58.637 --rc genhtml_legend=1 00:17:58.637 --rc geninfo_all_blocks=1 00:17:58.637 --rc geninfo_unexecuted_blocks=1 00:17:58.637 00:17:58.637 ' 00:17:58.637 18:58:41 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:58.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.637 --rc genhtml_branch_coverage=1 00:17:58.637 --rc genhtml_function_coverage=1 00:17:58.637 --rc genhtml_legend=1 00:17:58.637 --rc geninfo_all_blocks=1 00:17:58.637 --rc geninfo_unexecuted_blocks=1 00:17:58.637 00:17:58.637 ' 00:17:58.637 
18:58:41 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:58.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.637 --rc genhtml_branch_coverage=1 00:17:58.637 --rc genhtml_function_coverage=1 00:17:58.637 --rc genhtml_legend=1 00:17:58.637 --rc geninfo_all_blocks=1 00:17:58.637 --rc geninfo_unexecuted_blocks=1 00:17:58.637 00:17:58.637 ' 00:17:58.637 18:58:41 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:58.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.637 --rc genhtml_branch_coverage=1 00:17:58.637 --rc genhtml_function_coverage=1 00:17:58.637 --rc genhtml_legend=1 00:17:58.637 --rc geninfo_all_blocks=1 00:17:58.637 --rc geninfo_unexecuted_blocks=1 00:17:58.637 00:17:58.637 ' 00:17:58.637 18:58:41 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:58.637 18:58:41 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:58.637 18:58:41 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:58.637 18:58:41 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:17:58.637 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:17:58.638 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:17:58.638 18:58:41 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:17:58.638 18:58:41 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:17:58.638 18:58:41 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:58.638 18:58:41 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.638 18:58:42 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:17:58.638 18:58:42 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89340 00:17:58.638 18:58:42 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:17:58.638 18:58:42 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89340 00:17:58.638 18:58:42 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89340 ']' 00:17:58.638 18:58:42 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.638 18:58:42 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.638 18:58:42 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.638 18:58:42 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.638 18:58:42 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:58.898 [2024-11-16 18:58:42.114700] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:58.898 [2024-11-16 18:58:42.114817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89340 ] 00:17:58.898 [2024-11-16 18:58:42.292327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:59.157 [2024-11-16 18:58:42.402495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.157 [2024-11-16 18:58:42.402533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.728 18:58:43 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.728 18:58:43 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:17:59.728 18:58:43 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:17:59.728 18:58:43 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.728 18:58:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.988 18:58:43 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:17:59.988 18:58:43 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.988 18:58:43 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.988 18:58:43 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:17:59.988 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:17:59.988 ' 00:18:01.369 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:01.369 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:01.628 18:58:44 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:01.628 18:58:44 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:01.628 18:58:44 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.628 18:58:44 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:01.628 18:58:44 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:01.628 18:58:44 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:01.628 18:58:44 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:01.628 ' 00:18:02.567 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:02.826 18:58:46 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:02.826 18:58:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:02.826 18:58:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.826 18:58:46 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:02.826 18:58:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.826 18:58:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:02.826 18:58:46 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:02.826 18:58:46 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:03.395 18:58:46 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:03.395 18:58:46 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:03.395 18:58:46 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:03.395 18:58:46 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.395 18:58:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.395 18:58:46 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:03.395 18:58:46 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.395 18:58:46 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:03.395 18:58:46 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:03.395 ' 00:18:04.334 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:04.334 18:58:47 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:04.334 18:58:47 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:04.334 18:58:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.594 18:58:47 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:04.594 18:58:47 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:04.594 18:58:47 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:04.594 18:58:47 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:04.594 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:04.594 ' 00:18:05.975 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:05.975 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:05.975 18:58:49 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:05.975 18:58:49 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89340 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89340 ']' 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89340 00:18:05.975 18:58:49 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89340 00:18:05.975 killing process with pid 89340 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89340' 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89340 00:18:05.975 18:58:49 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89340 00:18:08.517 18:58:51 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:08.517 18:58:51 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89340 ']' 00:18:08.517 18:58:51 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89340 00:18:08.517 18:58:51 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89340 ']' 00:18:08.517 18:58:51 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89340 00:18:08.517 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89340) - No such process 00:18:08.517 Process with pid 89340 is not found 00:18:08.517 18:58:51 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89340 is not found' 00:18:08.517 18:58:51 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:08.517 18:58:51 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:08.517 18:58:51 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:08.517 18:58:51 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:08.517 00:18:08.517 real 0m10.178s 00:18:08.517 user 0m20.898s 00:18:08.517 sys 
0m1.155s 00:18:08.517 18:58:51 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.517 18:58:51 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.517 ************************************ 00:18:08.517 END TEST spdkcli_raid 00:18:08.517 ************************************ 00:18:08.517 18:58:51 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:08.517 18:58:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:08.517 18:58:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.517 18:58:51 -- common/autotest_common.sh@10 -- # set +x 00:18:08.778 ************************************ 00:18:08.778 START TEST blockdev_raid5f 00:18:08.778 ************************************ 00:18:08.778 18:58:51 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:08.778 * Looking for test storage... 00:18:08.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:08.778 18:58:52 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:08.778 18:58:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:18:08.778 18:58:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:08.778 18:58:52 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:08.778 18:58:52 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:08.778 18:58:52 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:08.778 18:58:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:08.778 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.778 --rc genhtml_branch_coverage=1 00:18:08.778 --rc genhtml_function_coverage=1 00:18:08.778 --rc genhtml_legend=1 00:18:08.778 --rc geninfo_all_blocks=1 00:18:08.778 --rc geninfo_unexecuted_blocks=1 00:18:08.778 00:18:08.778 ' 00:18:08.778 18:58:52 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.778 --rc genhtml_branch_coverage=1 00:18:08.778 --rc genhtml_function_coverage=1 00:18:08.778 --rc genhtml_legend=1 00:18:08.778 --rc geninfo_all_blocks=1 00:18:08.778 --rc geninfo_unexecuted_blocks=1 00:18:08.778 00:18:08.778 ' 00:18:08.778 18:58:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.778 --rc genhtml_branch_coverage=1 00:18:08.778 --rc genhtml_function_coverage=1 00:18:08.778 --rc genhtml_legend=1 00:18:08.778 --rc geninfo_all_blocks=1 00:18:08.778 --rc geninfo_unexecuted_blocks=1 00:18:08.778 00:18:08.778 ' 00:18:08.778 18:58:52 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:08.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:08.778 --rc genhtml_branch_coverage=1 00:18:08.778 --rc genhtml_function_coverage=1 00:18:08.778 --rc genhtml_legend=1 00:18:08.778 --rc geninfo_all_blocks=1 00:18:08.779 --rc geninfo_unexecuted_blocks=1 00:18:08.779 00:18:08.779 ' 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89621 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 
89621 00:18:08.779 18:58:52 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:08.779 18:58:52 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89621 ']' 00:18:08.779 18:58:52 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.779 18:58:52 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.779 18:58:52 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.779 18:58:52 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.779 18:58:52 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:09.039 [2024-11-16 18:58:52.339413] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:09.039 [2024-11-16 18:58:52.339518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89621 ] 00:18:09.039 [2024-11-16 18:58:52.510448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.299 [2024-11-16 18:58:52.642900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.260 18:58:53 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.260 18:58:53 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:18:10.260 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:18:10.260 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:18:10.260 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:18:10.260 18:58:53 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.260 18:58:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.260 Malloc0 00:18:10.260 Malloc1 00:18:10.521 Malloc2 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 
00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bbf22484-0088-4bed-918b-b4e05deb05bf"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bbf22484-0088-4bed-918b-b4e05deb05bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bbf22484-0088-4bed-918b-b4e05deb05bf",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9b71a1be-aaa6-40ba-a68e-34d7c50485b4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9ccba98d-1047-41c9-9f9a-945d9f889ef8",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "27e4b250-1907-40cd-b4c7-14725aa37318",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:18:10.521 18:58:53 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 89621 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89621 ']' 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89621 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89621 00:18:10.521 killing process with pid 89621 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89621' 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89621 00:18:10.521 18:58:53 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89621 00:18:13.834 18:58:56 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:13.834 18:58:56 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:13.834 18:58:56 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:13.834 18:58:56 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.834 18:58:56 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:13.834 ************************************ 00:18:13.834 START TEST bdev_hello_world 00:18:13.834 ************************************ 00:18:13.834 18:58:56 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:18:13.834 [2024-11-16 18:58:56.858954] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:13.834 [2024-11-16 18:58:56.859082] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89694 ] 00:18:13.834 [2024-11-16 18:58:57.034714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.834 [2024-11-16 18:58:57.163394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.439 [2024-11-16 18:58:57.767584] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:14.439 [2024-11-16 18:58:57.767638] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:18:14.439 [2024-11-16 18:58:57.767667] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:14.439 [2024-11-16 18:58:57.768207] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:14.439 [2024-11-16 18:58:57.768352] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:14.439 [2024-11-16 18:58:57.768368] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:14.439 [2024-11-16 18:58:57.768415] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:18:14.439 00:18:14.439 [2024-11-16 18:58:57.768432] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:15.821 00:18:15.821 real 0m2.420s 00:18:15.821 user 0m1.972s 00:18:15.821 sys 0m0.328s 00:18:15.821 ************************************ 00:18:15.821 END TEST bdev_hello_world 00:18:15.821 ************************************ 00:18:15.821 18:58:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:15.821 18:58:59 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:15.821 18:58:59 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:18:15.821 18:58:59 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:15.821 18:58:59 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:15.821 18:58:59 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:15.821 ************************************ 00:18:15.821 START TEST bdev_bounds 00:18:15.821 ************************************ 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89736 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:15.821 Process bdevio pid: 89736 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89736' 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89736 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89736 ']' 00:18:15.821 18:58:59 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.821 18:58:59 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:16.083 [2024-11-16 18:58:59.357633] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:16.083 [2024-11-16 18:58:59.357791] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89736 ] 00:18:16.083 [2024-11-16 18:58:59.538010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:16.343 [2024-11-16 18:58:59.674411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.343 [2024-11-16 18:58:59.674695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.343 [2024-11-16 18:58:59.674704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.914 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.914 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:18:16.914 18:59:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:16.914 I/O targets: 00:18:16.914 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:18:16.914 00:18:16.914 
00:18:16.914 CUnit - A unit testing framework for C - Version 2.1-3 00:18:16.914 http://cunit.sourceforge.net/ 00:18:16.914 00:18:16.914 00:18:16.914 Suite: bdevio tests on: raid5f 00:18:16.914 Test: blockdev write read block ...passed 00:18:17.174 Test: blockdev write zeroes read block ...passed 00:18:17.174 Test: blockdev write zeroes read no split ...passed 00:18:17.174 Test: blockdev write zeroes read split ...passed 00:18:17.174 Test: blockdev write zeroes read split partial ...passed 00:18:17.174 Test: blockdev reset ...passed 00:18:17.174 Test: blockdev write read 8 blocks ...passed 00:18:17.174 Test: blockdev write read size > 128k ...passed 00:18:17.174 Test: blockdev write read invalid size ...passed 00:18:17.174 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:17.174 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:17.174 Test: blockdev write read max offset ...passed 00:18:17.174 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:17.174 Test: blockdev writev readv 8 blocks ...passed 00:18:17.174 Test: blockdev writev readv 30 x 1block ...passed 00:18:17.174 Test: blockdev writev readv block ...passed 00:18:17.174 Test: blockdev writev readv size > 128k ...passed 00:18:17.174 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:17.174 Test: blockdev comparev and writev ...passed 00:18:17.174 Test: blockdev nvme passthru rw ...passed 00:18:17.174 Test: blockdev nvme passthru vendor specific ...passed 00:18:17.174 Test: blockdev nvme admin passthru ...passed 00:18:17.174 Test: blockdev copy ...passed 00:18:17.174 00:18:17.174 Run Summary: Type Total Ran Passed Failed Inactive 00:18:17.174 suites 1 1 n/a 0 0 00:18:17.174 tests 23 23 23 0 0 00:18:17.174 asserts 130 130 130 0 n/a 00:18:17.174 00:18:17.174 Elapsed time = 0.613 seconds 00:18:17.174 0 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89736 00:18:17.434 
18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89736 ']' 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89736 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89736 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89736' 00:18:17.434 killing process with pid 89736 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89736 00:18:17.434 18:59:00 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89736 00:18:18.816 18:59:02 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:18:18.816 00:18:18.816 real 0m2.923s 00:18:18.816 user 0m7.158s 00:18:18.816 sys 0m0.486s 00:18:18.816 18:59:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.816 18:59:02 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:18.816 ************************************ 00:18:18.816 END TEST bdev_bounds 00:18:18.816 ************************************ 00:18:18.816 18:59:02 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:18.816 18:59:02 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:18.816 18:59:02 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.816 
18:59:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:18.816 ************************************ 00:18:18.816 START TEST bdev_nbd 00:18:18.816 ************************************ 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89801 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89801 /var/tmp/spdk-nbd.sock 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89801 ']' 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:18.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.816 18:59:02 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:19.076 [2024-11-16 18:59:02.355574] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:19.076 [2024-11-16 18:59:02.355776] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.076 [2024-11-16 18:59:02.531197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.336 [2024-11-16 18:59:02.664646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:19.906 18:59:03 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:20.166 1+0 records in 00:18:20.166 1+0 records out 00:18:20.166 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383105 s, 10.7 MB/s 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:18:20.166 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:18:20.426 { 00:18:20.426 "nbd_device": "/dev/nbd0", 00:18:20.426 "bdev_name": "raid5f" 00:18:20.426 } 00:18:20.426 ]' 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:18:20.426 { 00:18:20.426 "nbd_device": "/dev/nbd0", 00:18:20.426 "bdev_name": "raid5f" 00:18:20.426 } 00:18:20.426 ]' 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:20.426 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:20.687 18:59:03 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.687 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:20.947 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:18:21.207 /dev/nbd0 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:21.207 18:59:04 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.207 1+0 records in 00:18:21.207 1+0 records out 00:18:21.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0015388 s, 2.7 MB/s 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.207 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:18:21.467 { 00:18:21.467 "nbd_device": "/dev/nbd0", 00:18:21.467 "bdev_name": "raid5f" 00:18:21.467 } 00:18:21.467 ]' 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:18:21.467 { 00:18:21.467 "nbd_device": "/dev/nbd0", 00:18:21.467 "bdev_name": "raid5f" 00:18:21.467 } 00:18:21.467 ]' 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:21.467 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:18:21.468 256+0 records in 00:18:21.468 256+0 records out 00:18:21.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140844 s, 74.4 MB/s 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:18:21.468 256+0 records in 00:18:21.468 256+0 records out 00:18:21.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029317 s, 35.8 MB/s 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.468 18:59:04 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.728 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:18:21.988 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:18:22.247 malloc_lvol_verify 00:18:22.247 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:18:22.507 597b723f-d820-414c-8161-83bc7dcb71e3 00:18:22.507 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:18:22.507 6c783cf1-ec6a-4f7d-b9b9-6790d3fd8a20 00:18:22.507 18:59:05 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:18:22.767 /dev/nbd0 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:18:22.767 mke2fs 1.47.0 (5-Feb-2023) 00:18:22.767 Discarding device blocks: 0/4096 done 00:18:22.767 Creating filesystem with 4096 1k blocks and 1024 inodes 00:18:22.767 00:18:22.767 Allocating group tables: 0/1 done 00:18:22.767 Writing inode tables: 0/1 done 00:18:22.767 Creating journal (1024 blocks): done 00:18:22.767 Writing superblocks and filesystem accounting information: 0/1 done 00:18:22.767 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:22.767 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89801 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89801 ']' 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89801 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89801 00:18:23.027 killing process with pid 89801 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89801' 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89801 00:18:23.027 18:59:06 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89801 00:18:24.408 ************************************ 00:18:24.408 END TEST bdev_nbd 00:18:24.408 ************************************ 00:18:24.408 18:59:07 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:18:24.408 00:18:24.408 real 0m5.508s 00:18:24.408 user 0m7.336s 00:18:24.408 sys 0m1.336s 00:18:24.408 18:59:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:24.408 18:59:07 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:24.408 18:59:07 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:18:24.408 18:59:07 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:18:24.408 18:59:07 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:18:24.408 18:59:07 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:18:24.408 18:59:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:24.408 18:59:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.408 18:59:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:24.408 ************************************ 00:18:24.408 START TEST bdev_fio 00:18:24.408 ************************************ 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:18:24.408 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:18:24.408 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:18:24.409 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:24.669 ************************************ 00:18:24.669 START TEST bdev_fio_rw_verify 00:18:24.669 ************************************ 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:18:24.669 18:59:07 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:24.669 18:59:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:24.669 18:59:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:24.669 18:59:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:18:24.669 18:59:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:24.669 18:59:08 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:18:24.929 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:18:24.929 fio-3.35 00:18:24.929 Starting 1 thread 00:18:37.153 00:18:37.153 job_raid5f: (groupid=0, jobs=1): err= 0: pid=89998: Sat Nov 16 18:59:19 2024 00:18:37.153 read: IOPS=12.7k, BW=49.7MiB/s (52.2MB/s)(497MiB/10001msec) 00:18:37.153 slat (nsec): min=16689, max=92209, avg=18493.51, stdev=1695.15 00:18:37.153 clat (usec): min=10, max=278, avg=125.85, stdev=43.56 00:18:37.153 lat (usec): min=28, max=297, avg=144.34, stdev=43.71 00:18:37.153 clat percentiles (usec): 00:18:37.153 | 50.000th=[ 130], 99.000th=[ 206], 99.900th=[ 225], 99.990th=[ 255], 00:18:37.153 | 99.999th=[ 273] 00:18:37.153 write: IOPS=13.3k, BW=52.0MiB/s (54.5MB/s)(514MiB/9880msec); 0 zone resets 00:18:37.153 slat (usec): min=7, max=222, avg=15.74, stdev= 3.50 00:18:37.153 clat (usec): min=57, max=1486, avg=290.57, stdev=38.93 00:18:37.153 lat (usec): min=71, max=1700, avg=306.32, stdev=39.90 00:18:37.153 clat percentiles (usec): 00:18:37.153 | 50.000th=[ 297], 99.000th=[ 363], 99.900th=[ 586], 99.990th=[ 1106], 00:18:37.153 | 99.999th=[ 1401] 00:18:37.153 bw ( KiB/s): min=50576, max=55032, per=98.99%, avg=52683.79, stdev=1479.78, samples=19 00:18:37.153 iops : min=12644, max=13758, avg=13170.95, stdev=369.94, samples=19 00:18:37.153 lat (usec) : 20=0.01%, 50=0.01%, 
100=17.10%, 250=39.26%, 500=43.57% 00:18:37.153 lat (usec) : 750=0.04%, 1000=0.02% 00:18:37.153 lat (msec) : 2=0.01% 00:18:37.153 cpu : usr=98.89%, sys=0.42%, ctx=72, majf=0, minf=10364 00:18:37.153 IO depths : 1=7.6%, 2=19.9%, 4=55.1%, 8=17.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:37.153 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.153 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.153 issued rwts: total=127359,131461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.153 latency : target=0, window=0, percentile=100.00%, depth=8 00:18:37.153 00:18:37.153 Run status group 0 (all jobs): 00:18:37.153 READ: bw=49.7MiB/s (52.2MB/s), 49.7MiB/s-49.7MiB/s (52.2MB/s-52.2MB/s), io=497MiB (522MB), run=10001-10001msec 00:18:37.153 WRITE: bw=52.0MiB/s (54.5MB/s), 52.0MiB/s-52.0MiB/s (54.5MB/s-54.5MB/s), io=514MiB (538MB), run=9880-9880msec 00:18:37.153 ----------------------------------------------------- 00:18:37.153 Suppressions used: 00:18:37.153 count bytes template 00:18:37.153 1 7 /usr/src/fio/parse.c 00:18:37.153 159 15264 /usr/src/fio/iolog.c 00:18:37.153 1 8 libtcmalloc_minimal.so 00:18:37.153 1 904 libcrypto.so 00:18:37.153 ----------------------------------------------------- 00:18:37.153 00:18:37.414 00:18:37.414 real 0m12.654s 00:18:37.414 user 0m12.999s 00:18:37.414 sys 0m0.649s 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.414 ************************************ 00:18:37.414 END TEST bdev_fio_rw_verify 00:18:37.414 ************************************ 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "bbf22484-0088-4bed-918b-b4e05deb05bf"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "bbf22484-0088-4bed-918b-b4e05deb05bf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "bbf22484-0088-4bed-918b-b4e05deb05bf",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "9b71a1be-aaa6-40ba-a68e-34d7c50485b4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "9ccba98d-1047-41c9-9f9a-945d9f889ef8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "27e4b250-1907-40cd-b4c7-14725aa37318",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:18:37.414 /home/vagrant/spdk_repo/spdk 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:18:37.414 00:18:37.414 real 0m12.938s 00:18:37.414 user 0m13.111s 00:18:37.414 sys 0m0.790s 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:37.414 18:59:20 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:18:37.414 ************************************ 00:18:37.414 END TEST bdev_fio 00:18:37.414 ************************************ 00:18:37.414 18:59:20 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:37.414 18:59:20 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:37.414 18:59:20 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:37.414 18:59:20 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:37.414 18:59:20 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:37.414 ************************************ 00:18:37.414 START TEST bdev_verify 00:18:37.414 ************************************ 00:18:37.414 18:59:20 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:37.675 [2024-11-16 18:59:20.956167] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:37.675 [2024-11-16 18:59:20.956362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90167 ] 00:18:37.675 [2024-11-16 18:59:21.133081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:37.935 [2024-11-16 18:59:21.243247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.935 [2024-11-16 18:59:21.243284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.506 Running I/O for 5 seconds... 00:18:40.385 10949.00 IOPS, 42.77 MiB/s [2024-11-16T18:59:24.798Z] 10846.00 IOPS, 42.37 MiB/s [2024-11-16T18:59:26.183Z] 10881.67 IOPS, 42.51 MiB/s [2024-11-16T18:59:26.796Z] 10871.25 IOPS, 42.47 MiB/s [2024-11-16T18:59:26.796Z] 10824.60 IOPS, 42.28 MiB/s 00:18:43.324 Latency(us) 00:18:43.324 [2024-11-16T18:59:26.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.324 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:43.324 Verification LBA range: start 0x0 length 0x2000 00:18:43.324 raid5f : 5.02 6412.11 25.05 0.00 0.00 30041.66 219.11 32968.33 00:18:43.324 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:43.324 Verification LBA range: start 0x2000 length 0x2000 00:18:43.324 raid5f : 5.03 4405.47 17.21 0.00 0.00 43634.33 131.47 32052.54 00:18:43.324 [2024-11-16T18:59:26.796Z] =================================================================================================================== 00:18:43.324 [2024-11-16T18:59:26.796Z] Total : 10817.58 42.26 0.00 0.00 35580.69 131.47 32968.33 00:18:45.247 ************************************ 00:18:45.247 END TEST bdev_verify 00:18:45.247 ************************************ 00:18:45.247 00:18:45.247 real 0m7.365s 00:18:45.247 user 0m13.609s 00:18:45.247 sys 0m0.278s 
00:18:45.247 18:59:28 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:45.247 18:59:28 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:45.247 18:59:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:45.247 18:59:28 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:18:45.247 18:59:28 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:45.247 18:59:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:45.247 ************************************ 00:18:45.247 START TEST bdev_verify_big_io 00:18:45.247 ************************************ 00:18:45.247 18:59:28 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:45.247 [2024-11-16 18:59:28.395661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:45.247 [2024-11-16 18:59:28.395786] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90260 ] 00:18:45.247 [2024-11-16 18:59:28.573890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:45.247 [2024-11-16 18:59:28.710011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.247 [2024-11-16 18:59:28.710039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.187 Running I/O for 5 seconds... 
00:18:48.066 633.00 IOPS, 39.56 MiB/s [2024-11-16T18:59:32.478Z] 760.00 IOPS, 47.50 MiB/s [2024-11-16T18:59:33.418Z] 760.67 IOPS, 47.54 MiB/s [2024-11-16T18:59:34.800Z] 776.50 IOPS, 48.53 MiB/s [2024-11-16T18:59:34.800Z] 774.00 IOPS, 48.38 MiB/s 00:18:51.328 Latency(us) 00:18:51.328 [2024-11-16T18:59:34.800Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.328 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:51.328 Verification LBA range: start 0x0 length 0x200 00:18:51.328 raid5f : 5.20 439.79 27.49 0.00 0.00 7292438.81 329.11 320525.41 00:18:51.328 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:51.328 Verification LBA range: start 0x200 length 0x200 00:18:51.328 raid5f : 5.21 341.28 21.33 0.00 0.00 9302046.00 203.01 391956.79 00:18:51.328 [2024-11-16T18:59:34.800Z] =================================================================================================================== 00:18:51.328 [2024-11-16T18:59:34.800Z] Total : 781.07 48.82 0.00 0.00 8171641.96 203.01 391956.79 00:18:52.711 00:18:52.711 real 0m7.701s 00:18:52.711 user 0m14.146s 00:18:52.711 sys 0m0.371s 00:18:52.711 18:59:35 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:52.711 18:59:35 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:52.711 ************************************ 00:18:52.711 END TEST bdev_verify_big_io 00:18:52.711 ************************************ 00:18:52.711 18:59:36 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:52.711 18:59:36 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:52.711 18:59:36 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:52.711 18:59:36 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:52.711 ************************************ 00:18:52.711 START TEST bdev_write_zeroes 00:18:52.711 ************************************ 00:18:52.711 18:59:36 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:52.711 [2024-11-16 18:59:36.173835] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:52.711 [2024-11-16 18:59:36.173987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90364 ] 00:18:52.970 [2024-11-16 18:59:36.351199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.230 [2024-11-16 18:59:36.489489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.800 Running I/O for 1 seconds... 
00:18:54.739 29799.00 IOPS, 116.40 MiB/s 00:18:54.739 Latency(us) 00:18:54.739 [2024-11-16T18:59:38.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.739 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:54.739 raid5f : 1.01 29772.93 116.30 0.00 0.00 4286.65 1345.06 5809.52 00:18:54.739 [2024-11-16T18:59:38.211Z] =================================================================================================================== 00:18:54.739 [2024-11-16T18:59:38.211Z] Total : 29772.93 116.30 0.00 0.00 4286.65 1345.06 5809.52 00:18:56.122 00:18:56.122 real 0m3.434s 00:18:56.122 user 0m2.953s 00:18:56.122 sys 0m0.352s 00:18:56.122 18:59:39 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.122 18:59:39 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:56.122 ************************************ 00:18:56.122 END TEST bdev_write_zeroes 00:18:56.122 ************************************ 00:18:56.122 18:59:39 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:56.122 18:59:39 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:56.122 18:59:39 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.122 18:59:39 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:56.122 ************************************ 00:18:56.122 START TEST bdev_json_nonenclosed 00:18:56.122 ************************************ 00:18:56.122 18:59:39 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:56.382 [2024-11-16 
18:59:39.675074] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:56.382 [2024-11-16 18:59:39.675250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90417 ] 00:18:56.642 [2024-11-16 18:59:39.855514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.642 [2024-11-16 18:59:39.994234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.642 [2024-11-16 18:59:39.994395] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:56.642 [2024-11-16 18:59:39.994471] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:56.642 [2024-11-16 18:59:39.994501] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:56.902 00:18:56.902 real 0m0.657s 00:18:56.902 user 0m0.414s 00:18:56.902 sys 0m0.138s 00:18:56.902 18:59:40 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:56.902 18:59:40 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:56.902 ************************************ 00:18:56.902 END TEST bdev_json_nonenclosed 00:18:56.902 ************************************ 00:18:56.902 18:59:40 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:56.902 18:59:40 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:56.902 18:59:40 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:56.902 18:59:40 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:56.902 
************************************ 00:18:56.902 START TEST bdev_json_nonarray 00:18:56.902 ************************************ 00:18:56.902 18:59:40 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:57.162 [2024-11-16 18:59:40.410145] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:57.162 [2024-11-16 18:59:40.410279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90448 ] 00:18:57.162 [2024-11-16 18:59:40.586457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:57.422 [2024-11-16 18:59:40.718422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.422 [2024-11-16 18:59:40.718615] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:57.422 [2024-11-16 18:59:40.718686] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:57.422 [2024-11-16 18:59:40.718711] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:57.682 ************************************ 00:18:57.682 00:18:57.682 real 0m0.668s 00:18:57.682 user 0m0.411s 00:18:57.682 sys 0m0.151s 00:18:57.682 18:59:40 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.682 18:59:40 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:57.682 END TEST bdev_json_nonarray 00:18:57.682 ************************************ 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:18:57.682 18:59:41 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:18:57.682 00:18:57.682 real 0m49.065s 00:18:57.682 user 1m5.744s 00:18:57.682 sys 0m5.527s 00:18:57.682 18:59:41 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:57.682 ************************************ 00:18:57.682 END TEST blockdev_raid5f 00:18:57.682 
************************************ 00:18:57.682 18:59:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:57.682 18:59:41 -- spdk/autotest.sh@194 -- # uname -s 00:18:57.682 18:59:41 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:18:57.683 18:59:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:57.683 18:59:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:18:57.683 18:59:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:18:57.683 18:59:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:18:57.683 18:59:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:18:57.683 18:59:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:57.683 18:59:41 -- common/autotest_common.sh@10 -- # set +x 00:18:57.943 18:59:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:18:57.943 18:59:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:18:57.943 18:59:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:18:57.943 18:59:41 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:18:57.943 18:59:41 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:18:57.943 18:59:41 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 
00:18:57.943 18:59:41 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:18:57.943 18:59:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:57.943 18:59:41 -- common/autotest_common.sh@10 -- # set +x 00:18:57.943 18:59:41 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:18:57.943 18:59:41 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:18:57.943 18:59:41 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:18:57.943 18:59:41 -- common/autotest_common.sh@10 -- # set +x 00:19:00.486 INFO: APP EXITING 00:19:00.486 INFO: killing all VMs 00:19:00.486 INFO: killing vhost app 00:19:00.486 INFO: EXIT DONE 00:19:00.746 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:00.746 Waiting for block devices as requested 00:19:00.746 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:19:00.746 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:19:01.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:01.687 Cleaning 00:19:01.687 Removing: /var/run/dpdk/spdk0/config 00:19:01.687 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:19:01.947 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:19:01.947 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:19:01.947 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:19:01.947 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:19:01.947 Removing: /var/run/dpdk/spdk0/hugepage_info 00:19:01.947 Removing: /dev/shm/spdk_tgt_trace.pid56894 00:19:01.947 Removing: /var/run/dpdk/spdk0 00:19:01.947 Removing: /var/run/dpdk/spdk_pid56653 00:19:01.947 Removing: /var/run/dpdk/spdk_pid56894 00:19:01.947 Removing: /var/run/dpdk/spdk_pid57123 00:19:01.947 Removing: /var/run/dpdk/spdk_pid57227 00:19:01.947 Removing: /var/run/dpdk/spdk_pid57272 00:19:01.947 Removing: /var/run/dpdk/spdk_pid57411 00:19:01.947 Removing: /var/run/dpdk/spdk_pid57429 00:19:01.947 
Removing: /var/run/dpdk/spdk_pid57639 00:19:01.947 Removing: /var/run/dpdk/spdk_pid57745 00:19:01.947 Removing: /var/run/dpdk/spdk_pid57852 00:19:01.947 Removing: /var/run/dpdk/spdk_pid57974 00:19:01.947 Removing: /var/run/dpdk/spdk_pid58077 00:19:01.947 Removing: /var/run/dpdk/spdk_pid58116 00:19:01.947 Removing: /var/run/dpdk/spdk_pid58153 00:19:01.947 Removing: /var/run/dpdk/spdk_pid58229 00:19:01.947 Removing: /var/run/dpdk/spdk_pid58350 00:19:01.947 Removing: /var/run/dpdk/spdk_pid58793 00:19:01.947 Removing: /var/run/dpdk/spdk_pid58863 00:19:01.947 Removing: /var/run/dpdk/spdk_pid58931 00:19:01.947 Removing: /var/run/dpdk/spdk_pid58953 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59092 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59108 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59257 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59275 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59339 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59359 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59429 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59447 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59642 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59678 00:19:01.947 Removing: /var/run/dpdk/spdk_pid59766 00:19:01.947 Removing: /var/run/dpdk/spdk_pid61093 00:19:01.947 Removing: /var/run/dpdk/spdk_pid61299 00:19:01.947 Removing: /var/run/dpdk/spdk_pid61439 00:19:01.947 Removing: /var/run/dpdk/spdk_pid62077 00:19:01.947 Removing: /var/run/dpdk/spdk_pid62283 00:19:01.947 Removing: /var/run/dpdk/spdk_pid62423 00:19:01.947 Removing: /var/run/dpdk/spdk_pid63056 00:19:01.947 Removing: /var/run/dpdk/spdk_pid63380 00:19:01.947 Removing: /var/run/dpdk/spdk_pid63520 00:19:01.947 Removing: /var/run/dpdk/spdk_pid64894 00:19:01.947 Removing: /var/run/dpdk/spdk_pid65147 00:19:01.947 Removing: /var/run/dpdk/spdk_pid65293 00:19:01.947 Removing: /var/run/dpdk/spdk_pid66667 00:19:01.947 Removing: /var/run/dpdk/spdk_pid66920 00:19:02.207 Removing: /var/run/dpdk/spdk_pid67066 00:19:02.207 Removing: 
/var/run/dpdk/spdk_pid68446 00:19:02.207 Removing: /var/run/dpdk/spdk_pid68883 00:19:02.207 Removing: /var/run/dpdk/spdk_pid69031 00:19:02.207 Removing: /var/run/dpdk/spdk_pid70511 00:19:02.207 Removing: /var/run/dpdk/spdk_pid70770 00:19:02.207 Removing: /var/run/dpdk/spdk_pid70917 00:19:02.207 Removing: /var/run/dpdk/spdk_pid72391 00:19:02.207 Removing: /var/run/dpdk/spdk_pid72658 00:19:02.207 Removing: /var/run/dpdk/spdk_pid72805 00:19:02.207 Removing: /var/run/dpdk/spdk_pid74283 00:19:02.207 Removing: /var/run/dpdk/spdk_pid74772 00:19:02.207 Removing: /var/run/dpdk/spdk_pid74912 00:19:02.207 Removing: /var/run/dpdk/spdk_pid75057 00:19:02.207 Removing: /var/run/dpdk/spdk_pid75476 00:19:02.207 Removing: /var/run/dpdk/spdk_pid76200 00:19:02.207 Removing: /var/run/dpdk/spdk_pid76589 00:19:02.207 Removing: /var/run/dpdk/spdk_pid77282 00:19:02.207 Removing: /var/run/dpdk/spdk_pid77717 00:19:02.207 Removing: /var/run/dpdk/spdk_pid78465 00:19:02.207 Removing: /var/run/dpdk/spdk_pid78874 00:19:02.207 Removing: /var/run/dpdk/spdk_pid80832 00:19:02.207 Removing: /var/run/dpdk/spdk_pid81265 00:19:02.207 Removing: /var/run/dpdk/spdk_pid81706 00:19:02.207 Removing: /var/run/dpdk/spdk_pid83770 00:19:02.207 Removing: /var/run/dpdk/spdk_pid84251 00:19:02.207 Removing: /var/run/dpdk/spdk_pid84775 00:19:02.207 Removing: /var/run/dpdk/spdk_pid85827 00:19:02.207 Removing: /var/run/dpdk/spdk_pid86150 00:19:02.207 Removing: /var/run/dpdk/spdk_pid87083 00:19:02.207 Removing: /var/run/dpdk/spdk_pid87405 00:19:02.207 Removing: /var/run/dpdk/spdk_pid88338 00:19:02.207 Removing: /var/run/dpdk/spdk_pid88663 00:19:02.207 Removing: /var/run/dpdk/spdk_pid89340 00:19:02.207 Removing: /var/run/dpdk/spdk_pid89621 00:19:02.207 Removing: /var/run/dpdk/spdk_pid89694 00:19:02.207 Removing: /var/run/dpdk/spdk_pid89736 00:19:02.207 Removing: /var/run/dpdk/spdk_pid89994 00:19:02.207 Removing: /var/run/dpdk/spdk_pid90167 00:19:02.207 Removing: /var/run/dpdk/spdk_pid90260 00:19:02.207 Removing: 
/var/run/dpdk/spdk_pid90364 00:19:02.207 Removing: /var/run/dpdk/spdk_pid90417 00:19:02.207 Removing: /var/run/dpdk/spdk_pid90448 00:19:02.207 Clean 00:19:02.468 18:59:45 -- common/autotest_common.sh@1453 -- # return 0 00:19:02.468 18:59:45 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:19:02.468 18:59:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.468 18:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:02.468 18:59:45 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:19:02.468 18:59:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:02.468 18:59:45 -- common/autotest_common.sh@10 -- # set +x 00:19:02.468 18:59:45 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:02.468 18:59:45 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:02.468 18:59:45 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:19:02.468 18:59:45 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:19:02.468 18:59:45 -- spdk/autotest.sh@398 -- # hostname 00:19:02.468 18:59:45 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:19:02.728 geninfo: WARNING: invalid characters removed from testname! 
00:19:24.710 19:00:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:27.250 19:00:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:29.159 19:00:12 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:31.700 19:00:14 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:33.615 19:00:16 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:36.153 19:00:18 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:19:38.061 19:00:21 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:19:38.061 19:00:21 -- spdk/autorun.sh@1 -- $ timing_finish 00:19:38.061 19:00:21 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:19:38.061 19:00:21 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:38.061 19:00:21 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:19:38.061 19:00:21 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:38.061 + [[ -n 5426 ]] 00:19:38.061 + sudo kill 5426 00:19:38.070 [Pipeline] } 00:19:38.093 [Pipeline] // timeout 00:19:38.097 [Pipeline] } 00:19:38.108 [Pipeline] // stage 00:19:38.112 [Pipeline] } 00:19:38.122 [Pipeline] // catchError 00:19:38.129 [Pipeline] stage 00:19:38.130 [Pipeline] { (Stop VM) 00:19:38.141 [Pipeline] sh 00:19:38.424 + vagrant halt 00:19:40.967 ==> default: Halting domain... 00:19:49.113 [Pipeline] sh 00:19:49.397 + vagrant destroy -f 00:19:51.964 ==> default: Removing domain... 
00:19:51.977 [Pipeline] sh 00:19:52.262 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:19:52.272 [Pipeline] } 00:19:52.288 [Pipeline] // stage 00:19:52.293 [Pipeline] } 00:19:52.307 [Pipeline] // dir 00:19:52.313 [Pipeline] } 00:19:52.327 [Pipeline] // wrap 00:19:52.333 [Pipeline] } 00:19:52.346 [Pipeline] // catchError 00:19:52.356 [Pipeline] stage 00:19:52.358 [Pipeline] { (Epilogue) 00:19:52.371 [Pipeline] sh 00:19:52.656 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:19:56.872 [Pipeline] catchError 00:19:56.874 [Pipeline] { 00:19:56.887 [Pipeline] sh 00:19:57.173 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:19:57.173 Artifacts sizes are good 00:19:57.183 [Pipeline] } 00:19:57.196 [Pipeline] // catchError 00:19:57.207 [Pipeline] archiveArtifacts 00:19:57.215 Archiving artifacts 00:19:57.320 [Pipeline] cleanWs 00:19:57.333 [WS-CLEANUP] Deleting project workspace... 00:19:57.333 [WS-CLEANUP] Deferred wipeout is used... 00:19:57.341 [WS-CLEANUP] done 00:19:57.343 [Pipeline] } 00:19:57.355 [Pipeline] // stage 00:19:57.361 [Pipeline] } 00:19:57.375 [Pipeline] // node 00:19:57.381 [Pipeline] End of Pipeline 00:19:57.420 Finished: SUCCESS